language, and spatial and temporal awareness


boyfren shared a video, and we discussed a bit

The Cognitive Tradeoff Hypothesis (youtub)

the concept proposed is that spatial awareness (that is, the ability to note and keep track of multiple objects within a given space) has somehow in our history been compromised or supplanted by the development of language, which in turn allowed us a better temporal awareness (that is, location of self with respect to past and future events, and the nature of causality)

my first response, after watching, was a concern over methodology. these chimpanzees have trained themselves to perform well on this puzzle, whereas the guest in this video, and any other test subjects, will be approaching it for the first time, or at the least will not have that same long-term background. furthermore, the untrained adult (30-something?) guest is able to beat the well-trained chimpanzee adult on his first try, losing only to a much younger chimp who is admitted to be the best of the group

of course it’s probable that more rigorous testing has been done at this facility and just isn’t presented here. still, i would be very sceptical of any results that don’t come from pitting these different chimpanzees against closer human analogues, which in this case would mean, say, that アユム (the young champion) should be pitted against a video-game-playing 18 or 19 year old who’s had a while to familiarise with these tasks

that being said, it is a very interesting proposition, and one i’ve thought about myself a lot. for some years now, i’ve been running “tests” on myself to get a sense of what sorts of tasks do and don’t conflict with my brain’s language processing abilities. the most directly relevant of these tests have been those involving simultaneous spoken language (audiobooks or lectures) and video games. (please excuse the performance anxiety in all the following videos)

here is a game called towerfall:

this game is an instance of exactly what professor 松沢 described, where the player is presented with a large set of enemies at once (progressively more and more over the course of the level, with this being only the beginning) and must maintain an awareness of each of their positions and movements, weaving among them and launching gravity-affected projectiles at the right moments to hit them at their future positions

this seems like a lot to handle, but i’ve found that playing it does not noticeably impede my ability to process and retain spoken language, or vice versa. and i’ve tested it against things as various as Robert Graves’ The Greek Myths, a lecture series on bacteria and microbes, Iain Banks’ culture novels, which can often require visualising unfamiliar objects and creatures, and even Jeanette Winterson and various poets

this same multi-tasking ability extends to other sorts of games as well. the puzzle game Shenzhen I/O has within it this interesting, atypical solitaire.

in this game, along with the typical sort of stacking cards according to numerical weight, there are three sets of four “dragons”, each set of which must be surfaced and put away all together, and one joker card that is put away as soon as it is surfaced. these complications, and the lack of any “undo” feature, make it necessary to look into the future consequences of any given move and to form a strategy for accomplishing waypoint goals on the way to clearing the whole board
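for reference, the deck itself can be sketched as data. this is just my own model of it (assuming the usual composition: three suits numbered 1 to 9, three colours of four identical dragons each, and a single joker, forty cards in all), not anything taken from the game’s code:

```python
from collections import Counter

# a sketch of the shenzhen solitaire deck (my assumption, not the
# game's actual data): three suits numbered 1-9, three colours of
# four identical "dragon" cards each, and a single joker
SUITS = ["bamboo", "characters", "coins"]
DRAGON_COLOURS = ["red", "green", "white"]

deck = (
    [(suit, n) for suit in SUITS for n in range(1, 10)]  # 27 number cards
    + [("dragon", colour) for colour in DRAGON_COLOURS for _ in range(4)]  # 12 dragons
    + [("joker", None)]  # put away as soon as it is surfaced
)

# a dragon set can only be put away once all four of one colour are
# exposed at the same time -- this is what forces the look-ahead
# planning described above, since burying one dragon can lock the set
dragon_counts = Counter(card for card in deck if card[0] == "dragon")

print(len(deck))                        # 40
print(dragon_counts[("dragon", "red")]) # 4
```

nothing deep, but it makes the combinatorial pressure visible: twelve of the forty cards can’t be stacked at all and have to be cleared in simultaneous groups of four.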

and this game also, despite its use of symbols with signified meanings and reasoning about possible futures, seems not to conflict at all with audible language processing

i’ve of course tested against other games, including those involving 3d spaces, competition with other humans, and written words (this seems to be the proper breaking point; any single written word or instruction can be enough to interrupt listening), but i think more than anything else these two results are key, because they imply two things: that spoken language and spatial thinking need not take place in the same region of the brain, as they can be done in tandem, and that some conceptual and temporal reasoning can also be done without, and even in parallel with, the use of language

there is a certain state of mind which one must enter in order to process and track a field of moving entities, as in towerfall or the number games in the video. the first step is to not look at anything, as this sort of pointed attention can only be given to one entity at once. instead there’s a sort of “glazing over”, where none of the given objects is focussed on, and everything is noted instead via peripheral vision

i’ve verified the usefulness of this state in “real-life” experience as well. because i haven’t got a car, i do a lot of walking, and while walking i always simultaneously read a physical book, eyes scanning the words and brain internally voicing them out, while visual and auditory surroundings are tracked and accounted for “peripherally”

this sort of unfocussed state is, i would guess, the same tool that アユム uses to note and remember 9 ordered positions in 0.5 seconds, and that the guest fails to use. pointed thought is just too slow, and it seems that language is quick to follow it (in this case maybe internally voicing “nine”, “twelve”, “next” etc).

perhaps this is the root of the difference. that modern humans have a tendency to lean on focussed, linear, language-y thought even in cases when unfocussed thought would be more performant

this raises and ties in with a lot of further questions and ideas (“what is a language?”, Howard Stephen Berg and “voiceless reading”, languages and computing, sapir-whorf, etc), but i’ll leave those off for a future post

bye for now ^_^

song of the day:

FilFla - Wst-Est