Algorithmic Pattern Seeking?

July 15, 2009

(cross posted at cog and sprocket and alekseistevens.com)

My wife does a lot of work with data visualization, in which she extracts patterns and stories from large data sets and represents them visually through graphic design or animation.  It’s a really interesting and elucidating way of making sense out of huge amounts of information; in a sense, figuring out what stories the data tell on their own, rather than using data to support a preconceived idea.  Data visualizations can be <a href="http://bit.ly/1TpSTt">pragmatic</a>, or <a href="http://bit.ly/12Ld0R">artistic</a>, and are often both.

I was thinking recently, though, about all the patterns that must exist in data that we don’t even know to look for, and what kinds of interesting stories they might tell us.  I am not a computer programmer, but it seems it must be possible to develop some kind of algorithmic pattern-seeker.  The human mind is constantly on the lookout for patterns, but with pretty low fidelity, which is why we see Jesus in tree stumps and burnt toast and water stains, why we think more weird stuff happens during a full moon than at other times, and why we fall prey to all kinds of other very human logical fallacies.  A machine, on the other hand, would not be susceptible to confirmation bias and other such pitfalls.

I wonder whether the next big development in data visualization is going to come when we can just feed enormous amounts of data into a system that will make its own sense out of it rather than requiring some kind of human intervention telling it what to look for.  This would be a boon both to artists and to scientists – to everyone concerned with parsing data and finding the larger truths they represent.  (If you know of a project like this that already exists or is in the works, please let me know in the comments!)
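To make the idea a little more concrete, here’s a minimal sketch (in Python, using the scikit-learn library) of what I mean: the program is handed a pile of numbers with no hints about their structure, tries out several possible groupings, and keeps whichever one holds together best.  The data, the clustering method (k-means), and all the parameters here are just illustrative choices, not anyone’s actual system.

```python
# A toy "pattern-seeker": given a table of numbers and no hints,
# try several cluster counts and keep the one that scores best.
# Assumes numpy and scikit-learn are installed; the data is made up.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Fake data set: three hidden groups the program is never told about.
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(100, 2)),
])

best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 8):  # let the machine try different numbers of patterns
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    score = silhouette_score(data, labels)  # how cleanly separated are the groups?
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

print(f"Found {best_k} groups (silhouette score {best_score:.2f})")
```

The point isn’t this particular algorithm; it’s that nothing in the code tells the machine what pattern to expect, or even how many patterns there are.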

Data <em>auralization</em> doesn’t quite have the same ring to it (no pun intended), but it is something people do.  Sound artist and kinetic sculptor <a href="http://bit.ly/yV9l7">Trimpin</a>, for example, created an installation in which sounds and musical robots are controlled by a live incoming stream of seismic data.  Another sound artist, Andrea Polli, created a <a href="http://bit.ly/uaFr8">piece</a> that maps climate data to different sonic parameters in an algorithmic composition.  I myself have done interactive performance works where aspects of a player’s improvisation (e.g., pitch, loudness, tempo, number of attacks) control an algorithmically generated electronic counterpoint.
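For a toy illustration of that kind of mapping, here’s a sketch that turns an arbitrary numeric series into a sequence of sine tones, with each data point’s value controlling the pitch of one tone.  The series, the frequency range, and the note length are all made-up choices; real pieces like Polli’s map many more parameters than just pitch.

```python
# Toy data "auralization": map a numeric series onto sine-tone pitches
# and write the result to a WAV file. Uses only the standard library.
import math
import struct
import wave

data = [3.2, 4.1, 7.8, 6.5, 2.0, 9.3, 5.5, 8.1]   # any series will do
lo, hi = min(data), max(data)

RATE = 44100          # samples per second
NOTE_SECS = 0.3       # length of each tone

frames = bytearray()
for value in data:
    # Scale the data point into an arbitrary frequency range (220-880 Hz).
    freq = 220 + (value - lo) / (hi - lo) * (880 - 220)
    for i in range(int(RATE * NOTE_SECS)):
        sample = math.sin(2 * math.pi * freq * i / RATE)
        frames += struct.pack('<h', int(sample * 32767 * 0.5))

with wave.open('auralized.wav', 'wb') as f:
    f.setnchannels(1)     # mono
    f.setsampwidth(2)     # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(bytes(frames))
```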

But in all these cases, the computer is programmed to look at the incoming data stream (or the input data set) in a very particular way.  What might we end up hearing or seeing if the computer is allowed to look using its own logic?  What might we learn that we never would have zeroed in on if left to our own devices?
