Google has announced a new technology called LIMoE, which it describes as a step towards Pathways, the company's goal of a unified AI architecture.

Pathways is an artificial intelligence architecture: a single model that can learn to perform the many tasks currently handled by separate algorithms. LIMoE is an acronym that stands for Multi-Modal Learning with a Sparse Mixture-of-Experts Model. It is a model that processes vision and text together.

Google LIMoE: The Revolution of Artificial Intelligence

While other architectures do similar things, LIMoE's success lies in how it accomplishes these tasks using a neural network technique called a sparse model. This approach was introduced in a 2017 research paper titled Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, which describes the Mixture-of-Experts (MoE) layer method.

In 2021, Google announced GLaM, an MoE model described in the paper GLaM: Efficient Scaling of Language Models with Mixture-of-Experts, which was trained on text only. The difference with LIMoE is that it works with text and images at the same time.

A sparse model differs from a “dense” model in that, instead of devoting the entire model to every task, a sparse model routes each task to several “experts” that each specialize in part of it. This reduces computational cost and makes the model more efficient, because only the relevant experts are activated for a given input.
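The routing idea can be sketched in a few lines of code. The following is a minimal, illustrative sketch of sparse top-k expert routing, not Google's actual LIMoE implementation; the expert and gating weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, NUM_EXPERTS, TOP_K = 8, 4, 2

# Hypothetical expert weights and gating weights (randomly initialized here;
# in a real model these would be learned during training).
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((DIM, NUM_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sparse_moe(x):
    """Route input x to its top-k experts and mix their outputs."""
    scores = softmax(x @ gate_w)               # gating probabilities per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the k best experts
    weights = scores[top] / scores[top].sum()  # renormalize over chosen experts
    # Only the selected experts run; the unselected ones cost nothing.
    return sum(w * (experts[i] @ x) for i, w in zip(top, weights))

x = rng.standard_normal(DIM)
y = sparse_moe(x)
print(y.shape)  # (8,)
```

The key property is in the last line of `sparse_moe`: however many experts exist, only `TOP_K` of them do any work per input, which is what keeps the model cheap as the number of experts grows.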

So, just as the brain sees a dog and knows it’s a dog, that it’s a husky, and that the husky has a white coat, this model can look at an image and work the same way: computational tasks are assigned to different experts that specialize in identifying the dog, its breed, its color, and so on.

The LIMoE model directs problems to “experts” that specialize in a particular task, and achieves similar or better results than existing problem-solving methods. An interesting aspect of the model is that some experts specialize primarily in image processing, others primarily in text processing, and some in both.

How LIMoE helps meet Google’s goal of an AI architecture called Pathways

Pathways is a new way of thinking about artificial intelligence that addresses many of the weaknesses of existing systems while synthesizing their strengths. To show what this means, let’s examine some of the shortcomings of current AI and how Pathways can improve on them.

Today’s AI systems are often trained from scratch on each new problem: the parameters of the mathematical model are initialized with random numbers. Imagine that every time you learned a new skill (such as jumping rope), you forgot everything you’d already learned (how to balance, how to jump, how to coordinate your hands) and had to start each new skill from zero.

This is how we train most machine learning models today. Instead of extending existing models to new tasks, we train each new model to do one thing and one thing only (or sometimes we specialize one general model for a particular task). As a result, we end up developing thousands of models for thousands of individual tasks. Not only does it take longer to learn each new task this way, but it also requires much more data, because each model has to learn everything about the world from scratch.

Instead, Google wants to train a model that not only can handle many different tasks but also uses and integrates its existing skills to learn new tasks faster and more efficiently. What the model learns by training on one task—say, learning how aerial photographs can predict the elevation of a landscape—can help it learn another task, say, predicting how floodwaters will flow through that terrain.
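The contrast between training from scratch and reusing existing skills can be illustrated with a toy sketch. This is an assumed, simplified setup, not Pathways itself: an encoder is pretended to be already trained on a first task, and only a small task-specific head is trained for a second task on top of it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend this encoder was already trained on task A (e.g. aerial imagery).
# In this sketch it is just a fixed random matrix standing in for learned weights.
pretrained_encoder = rng.standard_normal((16, 4))

def features(x):
    """Shared representation learned on the first task (frozen)."""
    return np.tanh(x @ pretrained_encoder)

# For a new task B, only this small head is trained; the 16*4 encoder
# parameters are reused rather than re-learned from scratch.
task_b_head = np.zeros(4)

def predict_task_b(x):
    return features(x) @ task_b_head

# One gradient-descent-style update touches only the 4 head parameters.
x, target = rng.standard_normal(16), 1.0
task_b_head += 0.1 * (target - predict_task_b(x)) * features(x)
```

The point of the sketch is the parameter count: adapting to the new task updates 4 numbers instead of all 64, which is the sense in which reusing an existing representation makes new tasks cheaper to learn.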

The LIMoE model has several capabilities that can be harnessed and modified as needed to perform new, more complex tasks—something close to the way the mammalian brain generalizes tasks. And of course, an AI model doesn’t have to be limited to these familiar senses; Pathways can process more abstract forms of data, helping to find useful patterns that have eluded human scientists in complex systems like climate.

The caveats of LIMoE architecture

This architecture has its drawbacks, which are not mentioned in Google’s announcement but are discussed in a research paper published by the internet giant.

The research paper states that, like other large models, LIMoE can also introduce bias into its results. The researchers note that they have not explicitly addressed the problems inherent in large models. A 2021 research paper warns that AI technologies can have negative impacts on society, such as inequality, misuse, economic and environmental harm, and legal and ethical problems.

Behavioral issues can also arise from a tendency toward homogenization, in which a single point of failure in one model is inherited by every task built on top of it. That cautionary research paper states: “The significance of foundation models can be summarized with two words: emergence and homogenization. Emergence means that the behavior of a system is implicitly induced rather than explicitly constructed; it is a source of both scientific excitement and anxiety about unanticipated consequences.”

Another cautionary note about vision-related AI advances concerns accuracy and bias. Computer vision models have a history of learned biases that lower accuracy and produce correlated errors for underrepresented groups, which can make them inappropriate for use in some real-world environments.

Achieving true artificial intelligence with LIMoE

If you were wondering “what is true artificial intelligence?” at the beginning of this piece, we hope we’ve managed to shed some light on it. True artificial intelligence is by nature self-sufficient, requiring very little human intervention. With the launch of LIMoE, we are one step closer to seeing that become a reality.