As AI research and technology development continue to advance, it is also becoming necessary to consider the energy resources and infrastructure needed to manage large datasets and perform complex computations. When we look to nature for models of efficiency, the human brain stands out, handling complex tasks with remarkable energy efficiency. Inspired by this, Microsoft researchers are seeking to understand the brain's efficient processes and replicate them in AI.
Microsoft Research Asia, in collaboration with Fudan University, Shanghai Jiao Tong University, and the Okinawa Institute of Science and Technology, has three notable projects underway. One introduces a neural network that simulates how the brain learns and processes information; another improves the accuracy and efficiency of predictive models for future events; and a third improves AI's proficiency in language processing and pattern prediction. These projects, highlighted in this blog post, aim not only to improve performance but also to significantly reduce energy consumption, paving the way for more sustainable AI technologies.
CircuitNet simulates brain-like neural patterns
Many AI applications rely on artificial neural networks designed to mimic the brain's complex neural patterns. These networks typically replicate only one or two types of connectivity pattern. In contrast, the brain propagates information using a variety of neural connection patterns, including feedforward excitation and inhibition, mutual inhibition, lateral inhibition, and feedback inhibition (Figure 1). The brain's networks consist of densely interconnected local areas with fewer connections between distant regions. Each neuron forms thousands of synapses to perform specific tasks within its region, while some synapses connect different functional groups: groups of interconnected neurons that work together to perform specific functions.
Inspired by this biological architecture, the researchers developed CircuitNet, a neural network that reproduces multiple types of connectivity patterns. CircuitNet's design combines densely connected local nodes with fewer connections between distant regions, improving signal transmission through circuit motif units (CMUs), small recurring patterns of connections that help process information. This structure, shown in Figure 2, supports multiple signal-processing cycles, which could advance how AI systems process complex information.
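The published architecture is more involved, but the gist (dense wiring inside each CMU, sparse wiring between units) can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption, not the researchers' implementation: the class names, dimensions, tanh activations, and the low-rank links standing in for sparse long-range connections are all hypothetical.

```python
import torch
import torch.nn as nn

class CMU(nn.Module):
    """One circuit motif unit: a small, densely connected group of nodes
    updated over several recurrent cycles, mimicking a local brain circuit."""
    def __init__(self, dim: int, cycles: int = 2):
        super().__init__()
        self.local = nn.Linear(dim, dim)   # dense all-to-all local wiring
        self.cycles = cycles

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.cycles):       # repeated local signal-processing cycles
            x = torch.tanh(self.local(x))
        return x

class CircuitNetSketch(nn.Module):
    """Chains CMUs with low-rank long-range links, so most parameters sit
    inside units and comparatively few connect distant ones."""
    def __init__(self, n_units: int = 4, dim: int = 64, rank: int = 8):
        super().__init__()
        self.units = nn.ModuleList(CMU(dim) for _ in range(n_units))
        # Inter-unit wiring through a rank-8 bottleneck uses far fewer
        # weights (2 * dim * rank) than dense unit-to-unit wiring (dim * dim).
        self.links = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, rank), nn.Linear(rank, dim))
            for _ in range(n_units - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.units[0](x)
        for unit, link in zip(self.units[1:], self.links):
            h = unit(link(h))              # sparse hand-off between distant units
        return h

model = CircuitNetSketch()
out = model(torch.randn(1, 64))            # -> tensor of shape (1, 64)
```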
The evaluation results are promising. CircuitNet outperformed several popular neural network architectures in function approximation, reinforcement learning, image classification, and time-series prediction, and it matched or exceeded the performance of other neural networks, often with fewer parameters, demonstrating its effectiveness and strong generalization across a range of machine learning tasks. Our next step is to test CircuitNet's performance on large-scale models with billions of parameters.
Spiking Neural Networks: A New Framework for Time Series Prediction
Spiking neural networks (SNNs) are emerging as a powerful type of artificial neural network, known for their energy efficiency and potential applications in fields such as robotics, edge computing, and real-time processing. Unlike traditional neural networks, which process signals continuously, an SNN's neurons fire only when their membrane potential reaches a specific threshold, generating discrete spikes. This approach mimics how the brain processes information and conserves energy. However, SNNs are not well suited to predicting future events from historical data, a key function in industries such as transportation and energy.
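The threshold-and-fire behavior is easy to see in a toy leaky integrate-and-fire (LIF) neuron, one common spiking-neuron model. This is a minimal sketch for intuition only; the threshold and decay values are illustrative assumptions:

```python
def lif_neuron(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential accumulates
    input and leaks over time; a spike (1) is emitted only when the
    potential crosses the threshold, after which it resets."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = decay * potential + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)                      # fire a spike
            potential = 0.0                       # reset after firing
        else:
            spikes.append(0)                      # stay silent: no spike, little energy
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))      # -> [0, 0, 1, 0, 0]
```

Because the neuron is silent at most time steps, computation (and thus energy) is spent only when spikes occur, which is the source of the efficiency described above.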
To improve SNNs' predictive capabilities, researchers proposed an SNN framework designed to forecast trends over time, such as electricity consumption or traffic patterns. This approach exploits the efficiency of spiking neurons in processing temporal information and aligns SNNs with time-series data collected at regular intervals. Two encoding layers transform the time-series data into spike sequences, which the SNN can then process to make accurate predictions, as shown in Figure 3.
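As a rough illustration of the encoding step, here is a simple delta encoder that turns a real-valued series into two spike channels. The framework's actual encoding layers are learned components, so treat this as a hypothetical stand-in for the idea of converting continuous values into spikes, not the method itself:

```python
import numpy as np

def delta_spike_encode(series, threshold=0.1):
    """Convert a real-valued time series into spike trains: an 'up' spike
    when the value rises by more than `threshold`, a 'down' spike when it
    falls by more than `threshold`. (Illustrative only.)"""
    up, down = [0], [0]
    last = series[0]
    for value in series[1:]:
        if value - last > threshold:
            up.append(1); down.append(0)   # significant rise -> up spike
            last = value
        elif last - value > threshold:
            up.append(0); down.append(1)   # significant fall -> down spike
            last = value
        else:
            up.append(0); down.append(0)   # small change -> no spike
    return np.array([up, down])            # two spike channels for the SNN

load = [0.50, 0.55, 0.70, 0.68, 0.40]      # e.g., normalized hourly electricity load
print(delta_spike_encode(load))            # -> [[0 0 1 0 0]
                                           #     [0 0 0 0 1]]
```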
Tests show that this SNN approach is very effective for time series prediction, often matching or outperforming traditional methods while significantly reducing energy consumption. SNNs successfully capture temporal dependencies and model time series dynamics, providing an energy-efficient approach that closely matches how the brain processes information. We plan to continue exploring ways to further improve SNNs based on how the brain processes information.
Refining SNN sequence prediction
While SNNs can help models predict future events, research has shown that their reliance on spike-based communication makes it difficult to directly apply many techniques from artificial neural networks. For example, SNNs struggle to handle the rhythmic and periodic patterns found in natural language processing and time-series analysis. In response, researchers developed a new approach for SNNs called CPG-PE, which combines two techniques:
- Central pattern generators (CPGs): Neural circuits in the brainstem and spinal cord that autonomously generate rhythmic patterns, controlling functions such as movement, breathing, and chewing.
- Positional encoding (PE): A process that helps artificial neural networks discern the order and relative positions of elements within a sequence.
By integrating these two techniques, CPG-PE helps SNNs discern the position and timing of signals, thereby improving their ability to process temporal information. This process is illustrated in Figure 4.
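One way to picture the combination: periodic signals at several frequencies play the role of central pattern generators, and thresholding them yields binary spike patterns that tag each time step with its position. The sketch below captures that intuition under stated assumptions; the published CPG-PE formulation differs in detail, and the function name, channel count, and base constant here are all hypothetical:

```python
import numpy as np

def cpg_positional_spikes(n_steps, n_channels=8, base=10000.0):
    """Rhythmic positional code for SNNs: sinusoids at several frequencies
    (standing in for central pattern generators) are thresholded into
    binary spikes, so each time step receives a distinct spike pattern."""
    pe = np.zeros((n_steps, n_channels))
    for t in range(n_steps):
        for i in range(0, n_channels, 2):
            freq = 1.0 / base ** (i / n_channels)  # one rhythm per channel pair
            pe[t, i] = np.sin(t * freq)
            pe[t, i + 1] = np.cos(t * freq)
    return (pe > 0).astype(int)   # step function keeps the code spike-compatible

spikes = cpg_positional_spikes(6)
print(spikes)                     # one binary positional pattern per time step
```

Because the output is already binary, it can be added to or concatenated with a spike train without breaking the spike-based communication that SNN hardware relies on.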
We evaluated CPG-PE on four real-world datasets: two covering traffic patterns and one each for electricity consumption and solar energy. The results, shown in Table 1, demonstrate that SNNs using this method significantly outperform those without positional encoding (PE). Moreover, CPG-PE can be easily integrated into any SNN designed for sequence processing, making it adaptable to a wide range of neuromorphic chips and SNN hardware.
Ongoing research on AI for greater capacity, efficiency, and sustainability
The innovations presented in this blog demonstrate the potential to create AI that is not only more capable but also more energy efficient. Moving forward, we are excited to deepen our collaborations and continue applying insights from neuroscience to AI research, sustaining our commitment to exploring ways to develop more sustainable technologies.