Previously, I’ve discussed how data visualization matches the definition of language. So, where do we go from there?
In American Sign Language (ASL), signs are broken down into smaller parts to describe them linguistically. Remember, there are no phonemes (compare "with" versus "them": why does the 'th' sound different?). Instead, we discuss things like handshape, movement, contact region, palm orientation, and non-manual markers (facial expressions, for example). Free sign language lesson: soon and train have the same handshape, but different palm orientation and contact region. To add confusion, we typically call these parameters (and you thought this was going to be straightforward).
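For the programmers in the room, one way to picture this parameter decomposition is as a tiny data model. This is purely an illustrative sketch of my own, not part of any ASL reference: the field names come from the parameters listed above, and the specific values for SOON and TRAIN are simplified placeholders, not a faithful phonological transcription.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Sign:
    """A sign described by its parameters (simplified for illustration)."""
    gloss: str
    handshape: str
    movement: str
    contact_region: str
    palm_orientation: str
    non_manual_marker: str = "neutral"

def differing_parameters(a: Sign, b: Sign) -> list[str]:
    """Return the names of the parameters where two signs differ."""
    a_d, b_d = asdict(a), asdict(b)
    return [k for k in a_d if k != "gloss" and a_d[k] != b_d[k]]

# Per the lesson above: SOON and TRAIN share a handshape but differ in
# palm orientation and contact region. The actual values here are
# placeholder strings, not real linguistic notation.
soon = Sign("SOON", handshape="H", movement="repeated",
            contact_region="chin", palm_orientation="toward signer")
train = Sign("TRAIN", handshape="H", movement="repeated",
             contact_region="back of hand", palm_orientation="down")

print(differing_parameters(soon, train))
# → ['contact_region', 'palm_orientation']
```

The point of the sketch is just that a sign is a bundle of parameter values, and two signs can "rhyme" by sharing some parameters while differing in others.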
We use these things to analyze the language and understand not just what makes a sign, but how signs are used grammatically and thematically. ASL poetry, for example, will often use handshape themes to create rhyme.
Within data visualization, we also have these parts, or parameters. We usually call them fancier names, but you came to this blog. There are smarter people doing this the right way; I swear, I'll make a list. I'm looking at this through the lens of ASL as a slightly different way of seeing it.
So how do I think about data visualization linguistically?
At Tapestry, my poster will do a deeper dive into these. Yes, I'm baiting you to come see my poster. Some kids beat you up for lunch money, others whine and beg, and some real oddballs write blogs. The perils of modern times, folks, I assure you. I offer some scary rabbits and a Led Zeppelin-like promise that all will (slowly) be revealed. Not going to Tapestry? It'll end up here.