  THE 22nd ANNUAL INTERNATIONAL CONFERENCE ON AUDITORY DISPLAY
SONIC INFORMATION DESIGN
Australian National University, Canberra, 3–7 July 2016

 

 

WORKSHOPS for ICAD2016

  Morning Workshops (9am–noon)

 

Using Layer-Based Amplitude Panning (LBAP) Audio Spatialization Algorithm and D4 Max Library's Rapid Prototyping Tools for the Interactive 3D Audification and Sonification of Multidimensional Data, and in Interactive Music Scenarios in High Density Loudspeaker Arrays.

  Presenter: Assoc Prof Ivica Ico Bukvic
Virginia Tech
Questions?: email ico@vt.edu

Participants: 10-20
Location: ANU School of Music
Requirements: Laptop with Max/MSP and D4 Max Library.

This workshop focuses on the new LBAP algorithm and the supporting D4 Max library's collection of tools for real-time 3D audification, sonification, and interactive performance in High Density Loudspeaker Array (HDLA) scenarios. It will cover the theory and application of these tools, including unique features such as the Spatialization Mask, Motion Blur, Radius, and advanced shape painting and rendering. Participants will have a hands-on opportunity to experiment with the software and learn how to create scalable, transportable systems that can adapt to most speaker configurations.

Of particular focus will be ways to import and integrate multidimensional data and render it spatially in real time. So far, LBAP and D4 have been successfully tested with configurations of up to 137 loudspeakers and up to 1,011 concurrent 24-bit, 48 kHz audio streams at sub-22 ms latency. The workshop may be of particular interest for its focus on 3D spatial auditory displays using HDLAs. It will be the first public workshop on this newly developed technology and is offered in addition to a paper submitted to the conference.
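As general background only (this is not the D4 API, whose objects and messages are documented with the library), the core layer-based idea can be sketched in a few lines of Python: loudspeakers are grouped into elevation layers, a source is panned between the two nearest speakers within a layer, and the result is crossfaded between the two nearest layers. The speaker layout below is hypothetical.

    import math

    # Hypothetical speaker layout: each layer is a ring of equally spaced
    # loudspeakers at a fixed elevation (degrees); not the D4 data format.
    LAYERS = {
        0.0:  [0, 45, 90, 135, 180, 225, 270, 315],   # ear-level ring azimuths
        45.0: [0, 90, 180, 270],                      # upper ring azimuths
    }

    def pair_gains(azimuth, ring):
        """Constant-power pan between the two ring speakers nearest the azimuth."""
        ring = sorted(ring)
        for i, a in enumerate(ring):
            b = ring[(i + 1) % len(ring)]
            span = (b - a) % 360 or 360
            if (azimuth - a) % 360 <= span:
                frac = ((azimuth - a) % 360) / span
                return {a: math.cos(frac * math.pi / 2),
                        b: math.sin(frac * math.pi / 2)}
        return {ring[0]: 1.0}

    def lbap_like_gains(azimuth, elevation):
        """Pan within the two layers nearest the elevation, then crossfade across them."""
        elevs = sorted(LAYERS)
        lo = max([e for e in elevs if e <= elevation], default=elevs[0])
        hi = min([e for e in elevs if e >= elevation], default=elevs[-1])
        w = 0.0 if hi == lo else (elevation - lo) / (hi - lo)
        gains = {}
        for layer, weight in ((lo, math.cos(w * math.pi / 2)),
                              (hi, math.sin(w * math.pi / 2))):
            for az, g in pair_gains(azimuth, LAYERS[layer]).items():
                gains[(layer, az)] = gains.get((layer, az), 0.0) + weight * g
        return gains  # {(layer_elevation, speaker_azimuth): gain}

    print(lbap_like_gains(azimuth=30.0, elevation=20.0))

Real HDLA work adds the D4-specific features mentioned above, such as the Spatialization Mask, Motion Blur and Radius, which have no counterpart in this toy sketch.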

Given the focus on multichannel audio, the aural components of the workshop will leverage the existing audio infrastructure while also providing alternative visual tools for experiencing spatialized audio. Participation without the tools is possible and encouraged, but for the best experience participants should consider obtaining the software online at http://ico.bukvic.net/main/d4/ or purchasing it on site; special pricing will be available to workshop participants. For additional information on the software and pricing, please contact the author. Because the D4 library is designed specifically for the Max environment, participants who decide to obtain it will need a Max license and a laptop (Windows or Mac) with up-to-date 64-bit Java libraries installed.

 

 
 

Dare to Design Hearables

  Presenters: Prof Simon Carlile,
Starkey Hearing Technologies,
email: simon_carlile@starkey.com
A/Prof Stephen Barrass,
University of Canberra,
stephen.barrass@canberra.edu.au
Jane Cockburn,
Kairos Now
jane@kairosnow.com.au

Participants: 10-20
Location: ANU School of Music
Requirements: Imagination

This workshop will provide participants with the opportunity to collaborate in the exploration of Hearables, which are next generation augmented hearing aid technologies.

The current early incarnations of hearables combine activity-tracking technologies with audio playback or streaming through earbuds. They offer a way to deliver new and convergent technology without requiring a change in consumer behaviour, while potentially also enabling new use cases, further technology convergence, and new services.

The key questions include:
(1) What can or could actually be achieved technically, given the price point and the regulatory and other frameworks (the platform)?
(2) What needs or wants might these technologies actually satisfy (the product)?
(3) Who would pay for this, and how would they pay for it (the monetization)?
(4) How would the business model be sustained in the face of competition, technical commoditization and market saturation (the business case)?

Participants will learn about the state of the art in this area and about opportunities for industry collaboration and open-source development of future products in this space. They will form groups and take part in an iterative design-thinking process using personas, journey maps and reframing techniques. By the end of the workshop, each group will have developed a proposal that could form the basis for a grant or industry partnership.

 

 
 

Computer Knitted Data Scarf

  Presenter: Prof Angelina Russo
University of Canberra
angelina.russo@canberra.edu.au
Participants: 5-10
Location: ANU School of Music
Requirements: Your own data-set

Your ICAD registration includes a 'data beanie' knitted from possum fur and designed to keep your head warm on the frosty midwinter mornings in Canberra.

This beanie is computer knitted from punch-cards that encode data logged from seals diving in the Antarctic as part of Nigel Helyer's sonification concert piece Biologging Retrofit. During the ICAD concert, attendees will hear a sonification of the data that is knitted into their beanie!

http://www.sonicobjects.com/index.php/projects/more/biologging_retrofit/

The workshop will provide an overview of computer knitting processes, techniques and technologies. You will then have the opportunity to map your own dataset onto a computer punchcard to control a knitting machine. For example, you could map the daily temperature of your home city into a pattern (see http://www.worldweatheronline.com/canberra-weather-averages/australian-capital-territory/au.aspx), then choose colours to design your own personal Data Scarf to keep the midwinter chill at bay.
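As a rough illustration of the kind of mapping involved (not the workshop's method; the 24-stitch card width is an assumption about a typical domestic punchcard machine), a few lines of Python can turn a list of temperatures into punchcard rows, with warmer days producing more contrast-colour stitches:

    # One possible data-to-punchcard mapping (not the workshop's method):
    # each row of the card is one day, each column one stitch, and the number
    # of punched stitches (contrast colour) scales with that day's temperature.
    WIDTH = 24                      # assumed card width in stitches; check your machine
    temps = [11, 12, 15, 18, 22, 25, 28, 27, 23, 19, 14, 12]   # example monthly averages, deg C

    lo, hi = min(temps), max(temps)
    for t in temps:
        punched = round((t - lo) / (hi - lo) * WIDTH)    # warmer -> more contrast stitches
        row = "X" * punched + "." * (WIDTH - punched)    # X = punch a hole, . = leave blank
        print(row)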

Although everyone applies the same process, different design decisions will result in different scarves. You can map the data into a pattern in many different ways, and you can even choose colours to match your beanie. The mapping of the data onto the pattern and the choice of colour and contrast have a big effect on the final fashionability and desirability. We know, because designing the data beanie required many iterations to achieve something that we hope you will like to wear.

If you like your scarf pattern you can order it, and it will be knitted during the conference.

 

 
 

Data Sonification using Python and Csound

  Presenter: Prof David Worrall
Audio Arts and Acoustics Department
Columbia College Chicago
dworrall@colum.edu

Participants: up to 16
Location: ANU School of Music Computer Lab
Requirements: Your own laptop preferred. Please complete the pre-workshop preparations in advance.

Python is a popular, easily learnt general-purpose programming language which can serve as a glue language to connect many separate software components in a simple and flexible manner. Widely used in the scientific community, it can also be used as a high-level modular framework for controlling low-level operations implemented by subroutine libraries in other languages.

Csound arguably has the widest, most mature collection of tools for sound synthesis and sound modification. There are few things related to audio-programming that you cannot do with Csound; it can be used in real-time to synthesise sound or process live audio or other control data (including MIDI and OSC) on the fly. It can be used to render sound on hand-held and other mobile devices or, when synthesis needs require it, sound can be rendered to file.

The Python API (Application Programming Interface) to Csound is robust and available on all hardware platforms. The aim of this workshop is to provide a hands-on introduction to producing software data sonifications using this powerful, open-ended, and extensible combination of tools. If required, it will be divided into sessions on Python and on Csound individually, and then on the two in combination.
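As a minimal sketch of this kind of coupling (assuming the ctcsound module distributed with Csound 6; this is not the workshop's own example), the following maps a small data set onto the pitch of a Csound oscillator through a control channel, playing to the default audio device:

    import ctcsound                     # Python API distributed with Csound 6

    csd = """<CsoundSynthesizer>
    <CsOptions>
    -odac -d
    </CsOptions>
    <CsInstruments>
    sr = 44100
    ksmps = 64
    nchnls = 2
    0dbfs = 1
    instr 1
      kfreq chnget "pitch"              ; pitch is driven from Python via a control channel
      asig  poscil 0.2, kfreq
            outs asig, asig
    endin
    </CsInstruments>
    <CsScore>
    i 1 0 6
    </CsScore>
    </CsoundSynthesizer>"""

    data = [3.1, 4.7, 2.2, 8.9, 6.4, 5.0]                # the data set to sonify
    lo, hi = min(data), max(data)

    cs = ctcsound.Csound()
    cs.compileCsdText(csd)
    cs.start()

    blocks_per_point = int(cs.sr() / cs.ksmps())          # hold each value for about 1 second
    for value in data:
        freq = 220.0 + (value - lo) / (hi - lo) * 660.0   # map the data range onto 220-880 Hz
        cs.setControlChannel("pitch", freq)
        for _ in range(blocks_per_point):
            if cs.performKsmps():                         # true once the score has finished
                break
    cs.reset()

The same division of labour scales up: Python handles data import and mapping, while Csound handles the synthesis, in real time or rendered to file.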

The workshop will begin with a detailed description of a non-trivial application example.

NB See the pre-workshop setup instructions.

 

 

 Afternoon Workshops (1pm–4pm)

 

Neurofeedback and Contemplative Interaction.

  Presenter: Dr George Poonkhin Khut
UNSW Australia | Art & Design
Participants: 5-10
Location: National Portrait Gallery

George Khut will provide an overview of his recent works with Alpha (brainwave) neurofeedback. Participants will have the opportunity to interact with the brainwave-controlled artwork in the gallery, take a 'behind-the-scenes' tour of the sound design and sonification strategies used in this work (built in Cycling '74's Max), and discuss the questions it raises: How is this kind of interaction similar to, and different from, traditional contemplative practices such as trance, yoga and qigong meditation? How might a consideration of these issues inform the design of future health apps and services?
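For readers new to alpha neurofeedback, one common strategy (a generic sketch, not Khut's Max patch; the 8-12 Hz band, sample rate and scaling below are assumptions) is to estimate the relative alpha-band power of a short EEG window and use it to drive a sound parameter such as gain:

    import numpy as np

    FS = 256                            # assumed EEG sample rate in Hz

    def alpha_relative_power(eeg_window):
        """Fraction of 1-30 Hz spectral power falling in the 8-12 Hz alpha band."""
        spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(len(eeg_window)))) ** 2
        freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / FS)
        total = spectrum[(freqs >= 1) & (freqs <= 30)].sum()
        alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
        return float(alpha / total) if total > 0 else 0.0

    def feedback_gain(relative_alpha, floor=0.1):
        """More alpha (a calmer, eyes-closed state) -> a louder, fuller feedback sound."""
        return floor + (1.0 - floor) * min(1.0, relative_alpha / 0.5)

    # A fake two-second window standing in for a live EEG stream.
    t = np.arange(2 * FS) / FS
    window = np.random.randn(2 * FS) + 3 * np.sin(2 * np.pi * 10 * t)
    print(feedback_gain(alpha_relative_power(window)))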

 

 
 

Hack Your ICAD NameBadge

  Presenter: Tim Barrass, Mozzi
barrasstim@gmail.com
Participants: 20
Location: ANU School of Music
Requirements: ICAD name badge, laptop,
downloads:
     Arduino 1.0.5,
     Mozzi library

Update: Unfortunately the Mozzi Badge did not get finished in time for the conference. However, we have some working prototypes and some Mozzi learning kits, so the workshop can go on :)

The workshop will introduce the Mozzi programming library and describe the hardware-fabrication process, using Fritzing, that has led to the MozziDuino hardware prototype.

The MozziDuino is an Arduino clone with an onboard sound amplifier and speaker, a light sensor, and an extremely sensitive electrostatic sensor. It has a USB port that allows you to program it to synthesise sounds using the Mozzi synthesis library:
https://sensorium.github.io/Mozzi/

Have a look at the Gallery (https://sensorium.github.io/Mozzi/gallery/) to see how Mozzi has been used for art installations, museum exhibits, music performances, boutique synthesisers and custom special-effects units. The workshop will be led by Tim Barrass, the inventor of Mozzi and the MozziDuino.

Upon completion of the workshop you will:

  • Understand the capabilities of the MozziDuino wearable sonification synth.
  • Be able to program interactive sonifications of sensors with Mozzi.
  • Be able to add your own sensors and sounds to the MozziDuino.

 

 
 

Biologging Retrofit

  Presenter: Dr Nigel Helyer
www.sonicobjects.com
sonic@sonicobjects.com
Participants : 4
Location: ANU School of Music
Requirements: Willingness to Perform in the ICAD Concert.

This workshop is a rehearsal in which 4 participants will prepare to perform the Biologging Retrofit concert piece, part of an experimental installation and performance series that visualises and sonifies scientific and statistical datasets.

The complex bio-logging data collected by Southern Elephant Seals on their dives in the Antarctic (and collated as Surface Wind Speed, Depth with Salinity, Depth with Temperature, and Ocean Bottom with Bottom Density) are transcribed onto a punch-paper music-box system. This is a crude but effective digital-to-analogue sonification of the data values. The simple prototype illustrates the potential to render tens of simultaneous data streams onto a pianola or Disklavier for 'live' performance.
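The basic transcription idea can be sketched as follows (a hypothetical mapping for illustration only, not Helyer's; the music-box note layout is assumed): each row of the paper strip is one time step, each column one of the box's fixed notes, and every data sample punches the hole for its nearest note.

    # A sketch of the punch-strip idea (a hypothetical mapping, not Helyer's):
    # each row of the strip is one time step, each column one of the music box's
    # fixed notes, and every data sample punches the hole for its nearest note.
    NOTES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5",
             "D5", "E5", "F5", "G5", "A5", "B5", "C6"]     # assumed comb layout

    depths = [120, 180, 310, 450, 430, 290, 150, 80]       # e.g. dive-depth samples (m)
    lo, hi = min(depths), max(depths)

    for d in depths:
        col = round((d - lo) / (hi - lo) * (len(NOTES) - 1))   # deeper -> higher column
        print(" ".join("o" if i == col else "." for i in range(len(NOTES))), NOTES[col])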

The participants will be introduced to the project background, the dataset, and the performance. They will then rehearse the playback of the piece on 4 punchcard controlled music boxes. During the workshop participants will collaborate in the design and staging of the performance for the concert.

 

 
 

Transposed Dekany

  Presenter: Dr Greg Schiemer
greg@schiemer.com.au
Participants: 16 (minimum), 80 (maximum)
Location: ANU School of Music
Requirements: iPhone, Satellite Gamelan App, willingness to Perform in the ICAD Concert.

A workshop-rehearsal that leads to a performance by a consort of eighty iPhones. Participants will gain experience of using a distributed mobile platform for interactive collaborative exploration of sonic materials.

The workshop will be in two parts: part one will focus on using the app and understanding its harmonic features, while part two will focus on rehearsing for the concert. Participants will explore a microtonal space created using the Satellite Gamelan app. The app will be explained in terms of how its scale is derived from pure harmonics, what its salient harmonic and melodic properties are, and what textural and acoustic by-products players can expect when the scale is played simultaneously in different transpositions on different instruments.
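By way of background (assuming the dekany of the title is the usual 2-out-of-5 combination-product set built from five harmonic factors; the workshop will explain the derivation actually used in Satellite Gamelan), such a scale and its transpositions can be computed in a few lines of Python:

    from itertools import combinations

    def octave_reduce(ratio):
        """Fold a frequency ratio into a single octave [1, 2)."""
        while ratio >= 2:
            ratio /= 2
        while ratio < 1:
            ratio *= 2
        return ratio

    def dekany(factors=(1, 3, 5, 7, 9)):
        """Ten-note scale from all pairwise products of five harmonic factors."""
        return sorted(octave_reduce(a * b) for a, b in combinations(factors, 2))

    def transpose(scale, ratio):
        """Transpose the whole scale by a just ratio, folding back into the octave."""
        return sorted(octave_reduce(r * ratio) for r in scale)

    base = dekany()
    print([round(r, 4) for r in base])
    print([round(r, 4) for r in transpose(base, 3 / 2)])   # the same dekany up a pure fifth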

Participants are asked to download the latest version of the Satellite Gamelan app from iTunes, free of charge, prior to the workshop.

Participants are encouraged to watch a concept video submitted for the Space Time Concerto competition in 2012, when the app was first used in a performance involving linked concert venues. View video.

 

 

 

The original Call for Workshop Proposals is available for reference.

 

If you're coming down under, why not make the most of it with an ICAD/NIME double?