The internet is no longer tasteless

Well, that is to say, we can now, at least experimentally, transport certain flavors over the internet.  Several groups of scientists have been working on techniques for recording, transmitting, and reproducing taste sensations, apparently with some success.

These approaches simplify flavor into five basic categories, each represented by a reference chemical:  “sodium chloride for salty, citric acid for sour, glucose for sweet, magnesium chloride for bitter and glutamate for umami.”

[One] system uses sensors to detect the levels of these chemicals in food, converts them to digital readings, and then sends these values to the pump, which pushes small amounts of different flavour-containing hydrogels into a small tube under a person’s tongue. [See source.]
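
To make the plumbing concrete, here is a minimal sketch of how such a reading might be represented in software: five numeric channels, one per reference chemical, translated into dosing commands for the pump.  The class name, gel labels, and 0-to-1 intensity scale are our own illustrative assumptions, not details from the research.

```python
from dataclasses import dataclass

# Five basic-taste channels, one per reference chemical named in the study.
# Field names, gel labels, and the 0.0-1.0 intensity scale are illustrative
# assumptions, not details taken from the published system.
@dataclass
class TasteReading:
    salty: float   # sodium chloride
    sour: float    # citric acid
    sweet: float   # glucose
    bitter: float  # magnesium chloride
    umami: float   # glutamate

    def to_pump_commands(self) -> list[tuple[str, float]]:
        """Translate each channel into a (hydrogel, relative dose) pair."""
        return [
            ("sodium_chloride_gel", self.salty),
            ("citric_acid_gel", self.sour),
            ("glucose_gel", self.sweet),
            ("magnesium_chloride_gel", self.bitter),
            ("glutamate_gel", self.umami),
        ]

# Example: a salty-umami broth, digitized and ready to transmit to the pump.
broth = TasteReading(salty=0.8, sour=0.1, sweet=0.05, bitter=0.0, umami=0.7)
print(broth.to_pump_commands())
```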

Naturally, the technological execution is also in flux at this stage.  The design mentioned above involves a small cube that dangles outside your mouth with a sensory tab that fits under your tongue.  Another approach uses a “rod that resembles a hand-held microphone with a surface that’s designed to be licked rather than talked into.”  And the same inventor (who, by the way, also gave us the “electric fork”) is also working on a “lickable screen” for incorporation into a cell phone.  [See source.]

These approaches rely on a good understanding of gustatory processing—the subprocesses that go on in your mouth and brain whenever you taste something.  It all begins with your taste buds: when food dissolves in the saliva in your mouth, a chemical signal is transduced into an electrical signal, which is then interpreted by your gustatory cortex (a region of your brain) to identify what you have just tasted.  [See source.]

Of course, numerous studies have also shown that how food looks and smells affects what we think we have tasted.

Expanding the palette of sensory experience available via the internet may be inevitable.  Think how seamlessly many people adopted virtual meetings involving a video feed during the pandemic.  Tactile sensations are also available via haptic gloves and similar devices, particularly for online gamers and engineers.  Moreover, more and more devices are remotely operated (think of drones!) with sensitive interface technology that provides resistance and other feedback directly to the operator.  One such company explains the benefit as follows:

By accurately simulating detailed tactile feedback and physical resistance in hands-on training scenarios, organizations accelerate procedural learning for complex manual tasks such as surgery, manufacturing processes, materials handling, equipment maintenance, and more.  [See source.]
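
For the technically curious, the core of such resistance feedback can be sketched as a “virtual spring”: the controller pushes back on the operator’s hand in proportion to how far the remote tool presses into a surface.  The stiffness value and function below are our own illustrative assumptions, not any vendor’s actual code.

```python
# A "virtual spring" model of force feedback: resistance grows linearly
# with how far the remote tool has penetrated a surface (Hooke's law).
# The stiffness constant is an illustrative placeholder, not a real spec.
STIFFNESS = 300.0  # N/m

def feedback_force(tool_depth_m: float) -> float:
    """Force in newtons to render on the operator's controller."""
    return STIFFNESS * tool_depth_m if tool_depth_m > 0 else 0.0

# The deeper the remote probe presses, the harder the controller pushes back.
for depth_m in (0.0, 0.002, 0.01):
    print(f"depth {depth_m * 1000:4.1f} mm -> {feedback_force(depth_m):4.1f} N")
```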

Virtual reality headsets improve on the visual experience of a flat screen by projecting slightly different views into each eye so as to create the illusion of three-dimensional depth, while motion sensors track head movement and adjust the image viewed accordingly.  Naturally, the resolution and quality of the image can add to or detract from the realism, but the technology, while bulky, is effective, not to mention pricey.  Some systems incorporate sound and even haptic signals as well.
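
The stereo half of that trick is simple enough to sketch: place two virtual cameras half an interpupillary distance (IPD) to either side of the tracked head position, rotating that offset as the motion sensors report a new head orientation.  The 64 mm IPD is a commonly cited average; the rest (yaw-only tracking, for brevity) is an illustrative simplification.

```python
import numpy as np

# Stereoscopic camera placement: each eye's camera sits half the
# interpupillary distance (IPD) from the head's center, along the head's
# local left-right axis. Real headsets track full 6-DoF pose; this sketch
# rotates only around the vertical (yaw) axis for brevity.
IPD_M = 0.064  # metres; a commonly cited average adult IPD

def eye_positions(head_pos: np.ndarray, head_yaw_rad: float):
    """Return (left_eye, right_eye) world positions for the two cameras."""
    # The head's local x-axis, rotated about the vertical axis by the yaw.
    right_axis = np.array([np.cos(head_yaw_rad), 0.0, -np.sin(head_yaw_rad)])
    half_offset = (IPD_M / 2.0) * right_axis
    return head_pos - half_offset, head_pos + half_offset

# As the head turns, both cameras (and thus the two slightly different
# images) swing with it, preserving the illusion of depth.
left, right = eye_positions(np.array([0.0, 1.7, 0.0]), np.radians(30))
print("left eye:", left, " right eye:", right)
```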

Odors, on the other hand, remain somewhat elusive.  Evolutionary biologists suggest that our emotions and the organs in our brain that manage them evolved from early animals’ olfactory processing capabilities.  A smell told primordial creatures instantly whether there was a predator, potential mate, or perhaps something to eat nearby.  So it is no wonder that our sense of smell is so nuanced.  Which brings us to the electronic nose. “Most electronic noses use chemical sensor arrays that react to volatile compounds on contact: the adsorption of volatile compounds on the sensor surface causes a physical change of the sensor. A specific response is recorded by the electronic interface transforming the signal into a digital value.”  [See source.]  Once recorded, the olfactory data is then processed via various algorithms to deduce what specifically has been smelled.
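
That final pattern-recognition step can be illustrated with a toy classifier: each known odor leaves a characteristic “fingerprint” of responses across the sensor array, and a fresh reading is matched to the nearest known fingerprint.  The sensor values and odor labels below are invented purely for illustration.

```python
import numpy as np

# Each known odor's "fingerprint" across a four-sensor array.
# All values and labels here are invented for illustration.
REFERENCE_FINGERPRINTS = {
    "coffee":  np.array([0.9, 0.2, 0.7, 0.1]),
    "banana":  np.array([0.1, 0.8, 0.3, 0.6]),
    "ethanol": np.array([0.4, 0.1, 0.9, 0.8]),
}

def identify_odor(reading: np.ndarray) -> str:
    """Return the label whose fingerprint is nearest (Euclidean distance)."""
    return min(
        REFERENCE_FINGERPRINTS,
        key=lambda label: np.linalg.norm(reading - REFERENCE_FINGERPRINTS[label]),
    )

print(identify_odor(np.array([0.85, 0.25, 0.65, 0.15])))  # -> coffee
```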

The e-nose, however, is still bulky and typically single-purpose—used mainly for industrial applications such as detecting the presence of a specific gas, monitoring food quality during manufacturing, or supporting medical diagnosis.  In other words, it may be some time until the e-nose goes mainstream.

Some, notably Ericsson, are focusing on the Internet of Senses (IoS), an interlinked array of technologies enabling realistic virtual experiences.  The company proposes this vision:

Looking forward to the technology development over the coming decade, we expect devices, sensors, and actuators as well as context-aware applications and network enablers to enable these experiences to become richer, involving all our senses, and ultimately merging the digital and the physical worlds. We call this experience the Internet of Senses. [See source.]

On the other hand, why wait?  Why not simply plug directly into the brain and be done with all the technological add-ons?  Indeed, some companies are plunging ahead in this area.  Apparently, the complexity of our brains requires very high bandwidth, and this is currently best achieved by embedding a device deep in the brain: “Proximity to the neurons and a greater number of electrodes that can “listen” to their activity increases the speed of data transfer, or the “bandwidth.” And the greater the bandwidth, the more likely it is that the device will be able to translate brain activity into speech or text.” [See source.]
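
A quick back-of-envelope calculation shows why electrode count matters so much: the raw data rate scales with the number of channels times the sampling rate times the bits per sample.  The figures below are ballpark values of the sort discussed in the BCI literature, not the specs of any particular implant.

```python
# Back-of-envelope neural data rate: channels x sampling rate x bits/sample.
# All three figures are ballpark assumptions, not any real device's specs.
electrodes = 1024
sample_rate_hz = 20_000   # spikes are fast, so kHz-range sampling per channel
bits_per_sample = 10

raw_bits_per_second = electrodes * sample_rate_hz * bits_per_sample
print(f"~{raw_bits_per_second / 1e6:.0f} Mbit/s of raw neural data")  # ~205 Mbit/s
```

Doubling the electrode count doubles that figure, which is one reason density and proximity drive the invasive designs.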

Many of the companies active in this invasive, direct-neural-link approach are doing so ostensibly to aid individuals who cannot easily move or speak, and experimentally they are making significant progress.  For example, while Neuralink may command more headlines, Blackrock Neurotech’s equipment has been implanted in dozens of people since 2004, receiving the “FDA Breakthrough Designation in 2021.” [See source.]  On the less-invasive end of the scale, “Precision Neuroscience, founded by a former Neuralink executive, has developed a flexible electrode array thinner than a human hair that resembles a piece of Scotch tape. It slides on top of the cortex through a small incision.”  [See source.]

But what about a wearable device that can read and interact with your brain?  “There are two challenges to realizing a noninvasive BCI [brain-computer interface] device: identifying a signal in the brain that could provide insight into when and where neural activity occurs, and demonstrating the ability to record this signal through the scalp and skull of a person[…]”  [See source.]  A team at Johns Hopkins has demonstrated success by illuminating the brain (through the scalp and skull) with laser light and reading the deformations that occur during neural activity.  Naturally, isolating a specific “thought” from the “chatter” represents a further challenge, for which the team assembled experts with experience in “biomedical imaging, underwater imaging, acoustic processing real-time hardware and software systems, neuroscience and medical research.” [See source.]  This non-invasive approach has a long road ahead but shows great promise in eventually providing the proverbial “thinking cap”.
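
The team’s write-ups don’t spell out those algorithms, but one standard trick for pulling a weak, repeatable signal out of “chatter” is trial averaging: record many repetitions of the same event and average them, so that uncorrelated noise cancels while the event-locked response survives.  The numbers below are purely illustrative.

```python
import numpy as np

# Trial averaging: uncorrelated noise shrinks roughly with the square root
# of the number of trials, while a repeatable signal is preserved.
# Signal shape, noise level, and trial count are illustrative assumptions.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
true_response = 0.2 * np.sin(2 * np.pi * 10 * t)  # weak, repeatable 10 Hz "signal"

trials = np.array([true_response + rng.normal(0, 1.0, t.size) for _ in range(400)])
averaged = trials.mean(axis=0)

# With 400 trials, residual noise drops by about sqrt(400) = 20x.
print(f"single-trial noise std ~ {(trials[0] - true_response).std():.2f}")  # ~1.00
print(f"averaged noise std     ~ {(averaged - true_response).std():.2f}")   # ~0.05
```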

Well, we may have exaggerated slightly in our headline, but the point is that immersive virtual experience is likely to become available in the future.  Suppose you want to taste a five-course meal but take in zero calories.  Think about the opportunities for restaurants to offer virtual menus (without the cost of ingredients or wasted food) or for weight-loss firms to help you enjoy dieting.  Or imagine sharing dinner with friends on another continent without having to travel.  Think of the implications for the travel, entertainment, food, beverage, and weight-loss industries—just to name a few.

If you need help imagining that future, we may be able to help.
