Meta recently announced a long-term research partnership to study the human brain. According to the company, it intends to use the results of this study to “guide the development of AI that processes speech and text as efficiently as people.”
This is the latest in Meta’s ongoing quest to perform the machine learning equivalent of alchemy: producing thought from language.
The big idea: Meta wants to understand exactly what’s going on in people’s brains when they process language. Then, somehow, it’s going to use this data to develop an AI capable of understanding language.
According to Meta AI, the company spent the past two years developing an AI system to process datasets of brainwave information in order to glean insights into how the brain handles communication.
Now, the company’s working with research partners to create its own databases.
Per a Meta AI blog post:
Our collaborators at NeuroSpin are creating an original neuroimaging data set to expand this research. We’ll be open-sourcing the data set, deep learning models, code, and research papers resulting from this effort to help spur discoveries in both AI and neuroscience communities. All of this work is part of Meta AI’s broader investments toward human-level AI that learns from limited to no supervision.
The plan is to create an end-to-end decoder for the human brain. This would involve building a neural network capable of translating raw brainwave data into words or images.
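In rough outline, such a decoder maps multichannel brainwave recordings to tokens in a vocabulary. The sketch below is purely illustrative — the shapes, layer choices, and names are assumptions, not Meta’s actual architecture — but it shows the basic data flow: encode channel readings into an embedding, pool over time, and project to word-token logits.

```python
# Hypothetical sketch of a brain-to-text decoder's forward pass.
# All dimensions and names are illustrative assumptions, not Meta's model.
import numpy as np

rng = np.random.default_rng(0)

# Toy input: one second of EEG-like data, 64 channels sampled at 256 Hz.
N_CHANNELS, N_SAMPLES, VOCAB_SIZE, EMBED_DIM = 64, 256, 1000, 32
brainwaves = rng.standard_normal((N_CHANNELS, N_SAMPLES))

# Stand-in "encoder": project each time step's channel vector into a
# shared embedding space (a single untrained linear layer here).
W_enc = rng.standard_normal((EMBED_DIM, N_CHANNELS)) * 0.01
embeddings = W_enc @ brainwaves            # shape: (EMBED_DIM, N_SAMPLES)

# Pool over time, then map to word-token logits (the "decoder" half).
pooled = embeddings.mean(axis=1)           # shape: (EMBED_DIM,)
W_dec = rng.standard_normal((VOCAB_SIZE, EMBED_DIM)) * 0.01
logits = W_dec @ pooled                    # shape: (VOCAB_SIZE,)

# The highest-scoring index would be looked up in a word vocabulary.
predicted_token = int(np.argmax(logits))
```

A real system would replace the linear layers with deep convolutional or transformer encoders trained on paired recordings and transcripts; the point here is only the input-to-token pipeline the phrase “end-to-end decoder” implies.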
That sounds pretty rad, but things quickly veer into stranger territory as the blog post continues beneath a subheading titled “Toward human-level AI”:
Overall, these studies support an exciting possibility — there are, in fact, quantifiable similarities between brains and AI models. And these similarities can help generate new insights about how the brain functions. This opens new avenues, where neuroscience will guide the development of more intelligent AI, and where, in turn, AI will help uncover the wonders of the brain.
Meta appears to be following in OpenAI’s footsteps here. Both companies have a vested interest in developing artificial general intelligence (AGI) — or an AI that’s generally capable of doing anything a human can.
OpenAI claims AGI is its sole mission, while Meta seems to be more of a dabbler while it’s focused on building the metaverse.
But they’re both going about it the same way: by trying to back into it through natural language processing (NLP).
It’s unclear how predicting speech from brainwaves will lead to human-level speech recognition, just as it’s unclear how GPT-3, or any future text generator, will lead to AGI.
There’s an argument to be made that, absent a clear path, researchers are merely solving problems in the general area of human understanding on the way to the eventual promise of AGI.
But there’s also the idea that deep learning isn’t robust enough to imitate or emulate the human brain sufficiently for the development of machines capable of human-level reasoning.
At the end of the day, Meta’s work in developing machine learning models to parse brain activity is important. It’s possible it could be useful in furthering our understanding of how the brain functions.
But it seems a bit far-fetched to frame the endeavor as part of the pursuit of machines capable of human-level reasoning. They’re not teaching AI to understand speech; they’re teaching it to predict brainwave activity.
Based on the research cited in the company’s announcement, Meta appears no closer than Tesla or OpenAI to finding the secret sauce that turns data-based insights into something that endows AI with vital warmth.
Tesla AI might play a role in AGI, given that it trains against the outside world, especially with the advent of Optimus
— Elon Musk (@elonmusk) January 19, 2022
It’s about time big tech stopped framing every AI advancement as the direct bridge to the sentient robots of tomorrow. And, perhaps, it’s also time to consider a different approach to AGI.
Source Link: https://thenextweb.com/news/metas-new-long-term-ai-study-sounds-like-openais-current-dead-end