Many of the references below come from the Sonification Handbook (SH).
(From the SH chapter on Audification) Kramer defines audification as “the direct translation of a data waveform into sound.” Indeed, Dombois and Eckel point out in the SH that the data does not even need to be sound-related, as evidenced by this audification of the shock waves associated with an earthquake: Audification Earthquake
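Because audification has no mapping step, it can be sketched in a few lines: the data values themselves become the audio samples. Below is a minimal, standard-library-only sketch; the `audify` helper and the synthetic decaying “rumble” series are illustrative stand-ins of my own, not the seismic data behind the linked example.

```python
import math
import struct
import wave

def audify(samples, out_path, rate=8000):
    """Audify a data series: normalize it and write the values
    directly as 16-bit mono audio samples (no parameter mapping)."""
    peak = max(abs(s) for s in samples) or 1.0
    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)     # mono
        wav.setsampwidth(2)     # 16-bit samples
        wav.setframerate(rate)  # playback rate sets the time compression
        frames = b"".join(
            struct.pack("<h", int(32767 * s / peak)) for s in samples
        )
        wav.writeframes(frames)

# Stand-in for seismometer readings: a decaying low-frequency "rumble".
data = [math.sin(0.3 * n) * math.exp(-n / 4000) for n in range(16000)]
audify(data, "quake.wav")  # 2 seconds of audio at 8 kHz
```

Choosing the playback rate is the main audification decision: seismic data is far too slow to hear directly, so it is replayed thousands of times faster to shift it into the audible range.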
(From the SH chapter on Earcons) Blattner et al. defined Earcons as: “non-verbal audio messages used in the user-computer interface to provide information to the user about some computer object, operation, or interaction”. Brewster further refined this definition as: “abstract, synthetic tones that can be used in structured combinations to create auditory messages”.
This Earcon is for Opening any application: “Open” Earcon
This Earcon is specifically for the Paint application: “Paint” Earcon
This Earcon indicates that the user should “Open” the “Paint” application and is a combination of the two sounds: “Open Paint Application” Earcon
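The three examples above illustrate Brewster's point about “structured combinations”: compound earcons are built by concatenating short tone motifs. A small sketch of that idea, using only the standard library; the specific motifs and frequencies here are hypothetical choices of mine, not the sounds in the linked examples.

```python
import math

RATE = 8000  # samples per second

def tone(freq, dur=0.15):
    """One pure sine tone of the given frequency (Hz) and duration (s)."""
    return [math.sin(2 * math.pi * freq * n / RATE)
            for n in range(int(RATE * dur))]

def motif(freqs):
    """An earcon motif: a short, structured sequence of tones."""
    out = []
    for f in freqs:
        out.extend(tone(f))
    return out

open_earcon = motif([440, 660])        # hypothetical "Open" motif
paint_earcon = motif([523, 523, 784])  # hypothetical "Paint" motif

# Compound earcon: concatenating the motifs yields "Open Paint".
open_paint = open_earcon + paint_earcon
```

Because the motifs are abstract rather than imitative, listeners must learn the mapping; in exchange, motifs compose into families of related messages, as in the “Open” + “Paint” example.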
Spearcons (Walker & Nance, 2006) combine speech and earcons: a spoken phrase (highly recognizable) is sped up until it is no longer recognized as speech and functions more like an auditory icon.
This spearcon is of the address book entry for Braxton Philbrick
This spearcon is of the word elephant
This spearcon is of the word elevator
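The core operation behind a spearcon is time compression of recorded speech. The naive version below simply keeps every k-th sample, which speeds the audio up but also raises its pitch; real spearcon pipelines may instead use pitch-preserving time compression. The `compress` helper and the placeholder “speech” buffer are illustrative assumptions, not the tool used for the linked examples.

```python
def compress(samples, factor):
    """Naively time-compress audio by keeping every `factor`-th sample.
    Played back at the original rate, the result is `factor`x faster
    (and higher-pitched; pitch-preserving compression avoids that)."""
    return samples[::factor]

# Placeholder for ~3 seconds of recorded speech at 8 kHz.
speech = [0.0] * 24000

# ~4x speedup: still speech-derived, but no longer intelligible as speech.
spearcon = compress(speech, 4)
```

Unlike earcons, spearcons need no learned mapping: even when unintelligible, the compressed phrase keeps enough of the original's contour to cue the listener, which is why the three examples above remain distinguishable from one another.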
Sonification is a broad term, and Hermann, Hunt, and Neuhoff (SH Chapter 1) offer a general definition of sonification as “the technique of rendering sound in response to data and interactions.” There are several methods of doing this; many are outlined in the Sonification Handbook, and a specific example here is a sonification of weather data.
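One common method from the Handbook is parameter-mapping sonification: each data value controls a sound parameter such as pitch. A minimal sketch, assuming a simple linear value-to-frequency mapping; the helper names, the frequency range, and the temperature series are hypothetical, not the weather example linked above.

```python
import math

RATE = 8000  # samples per second

def pitch_map(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value in [lo, hi] to a frequency in [f_lo, f_hi]."""
    t = (value - lo) / (hi - lo)
    return f_lo + t * (f_hi - f_lo)

def sonify(series, dur=0.2):
    """Parameter-mapping sonification: one tone per data point,
    with pitch tracking the data value."""
    lo, hi = min(series), max(series)
    audio = []
    for v in series:
        f = pitch_map(v, lo, hi)
        audio.extend(math.sin(2 * math.pi * f * n / RATE)
                     for n in range(int(RATE * dur)))
    return audio

# Hypothetical daily high temperatures (deg C) for one week.
temps = [12, 15, 19, 23, 21, 16, 13]
audio = sonify(temps)  # rising pitch = warming, falling pitch = cooling
```

Note the contrast with audification: here the data values steer a synthesis parameter (pitch) rather than being played as samples directly.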