“Liquid” Neural Network Adapts on the Go

In the realm of artificial intelligence, bigger is supposed to be better. Neural networks with billions of parameters power everyday AI-based tools like ChatGPT and DALL-E, and each new large language model (LLM) edges out its predecessors in size and complexity. Meanwhile, at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), a group of researchers has been working on going small.

In recent research, they demonstrated the efficiency of a new kind of very small machine learning system, with just 20,000 parameters, called a liquid neural network. They showed that drones…
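The article does not spell out the mechanism behind the network, but readers curious about what makes it "liquid" may find a minimal sketch of a liquid time-constant (LTC) update helpful: the idea, in the CSAIL group's published formulation, is that the input itself modulates each neuron's effective time constant, so the dynamics adapt as conditions change. The weight names, shapes, tanh parameterization, and explicit-Euler solver below are illustrative assumptions for this sketch, not details taken from the article.

```python
import numpy as np

def ltc_step(x, I, dt, W_in, W_rec, b, tau, A):
    """One explicit-Euler step of a liquid time-constant (LTC) cell (sketch).

    The same nonlinearity f that drives the state also gates the effective
    time constant, so the cell's dynamics shift with the input stream --
    the "liquid" behaviour. Shapes: x, b, tau, A are (n,); I is (m,);
    W_in is (n, m); W_rec is (n, n).
    """
    f = np.tanh(W_in @ I + W_rec @ x + b)       # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A         # LTC ordinary differential equation
    return x + dt * dxdt

# Toy usage with random weights (illustration only, not the paper's model).
rng = np.random.default_rng(0)
n, m = 8, 3
x = np.zeros(n)
params = dict(
    W_in=rng.normal(size=(n, m)) * 0.1,
    W_rec=rng.normal(size=(n, n)) * 0.1,
    b=np.zeros(n),
    tau=np.ones(n),
    A=rng.normal(size=n),
)
for t in range(100):
    I = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])  # toy input stream
    x = ltc_step(x, I, dt=0.1, **params)
```

Because each neuron is a small differential equation rather than a static weight layer, a network of only a few of these cells can stay compact, which is why the researchers can fit a useful model into roughly 20,000 parameters.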



News Source: spectrum.ieee.org

