ISCAS tutorial on Material and Physical Reservoir Computing for Beyond-CMOS Electronics

Christof Teuscher will be teaching a tutorial on “Material and Physical Reservoir Computing for Beyond-CMOS Electronics” at the IEEE International Symposium on Circuits and Systems (ISCAS).

ISCAS is the flagship conference of the IEEE Circuits and Systems (CAS) Society and the world’s premier forum for researchers in the active fields of theory, design, and implementation of circuits and systems.

Tutorial abstract:

Traditional computing is based on an engineering approach that imposes logical states and a computational model upon a physical substrate. Physical or material computing, on the other hand, harnesses the inherent, naturally occurring properties of a physical substrate to perform a computation. Reservoir computing is often used as the computing paradigm for doing so. In this tutorial, you will learn what reservoir computing is and how to use it for computing with emerging devices and fabrics. You will also learn about the current state of the art and what opportunities and challenges exist for future research. The tutorial is relevant for anybody interested in beyond-CMOS and beyond-von-Neumann architectures, ML, AI, neuromorphic systems, and computing with novel devices and circuits.
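For readers new to the paradigm, here is a minimal software sketch of the reservoir computing idea described above: a fixed, randomly connected dynamical system (the reservoir) transforms an input stream into a rich state trajectory, and only a linear readout is trained. The network sizes, scaling constants, and toy sine-prediction task below are illustrative choices, not material from the tutorial.

```python
import numpy as np

# Minimal echo state network, a common software stand-in for a physical reservoir.
# Only the linear readout W_out is trained; the input and reservoir weights stay
# fixed, mirroring how material reservoirs exploit a substrate's intrinsic dynamics.

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))           # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))             # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # spectral radius < 1 for stable dynamics

def run_reservoir(u):
    """Drive the reservoir with input sequence u (shape (T, n_in)) and collect its states."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a sine wave.
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T)).reshape(-1, 1)
y = np.roll(u, -1, axis=0)                             # next value as the target

X = run_reservoir(u)
lam = 1e-6                                             # ridge-regression readout (the only trained part)
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)

mse = np.mean((X[:-1] @ W_out - y[:-1]) ** 2)          # drop the wrapped last sample
print(f"training MSE: {mse:.5f}")
```

Because the reservoir itself is never trained, the same recipe applies whether the states come from a simulated random network, as here, or from measurements of a physical substrate.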

More info about ISCAS tutorials at https://iscas2023.org/tutorials

Nesara wins National Aspirations in Computing Award

tlab intern Nesara Shree won a National Aspirations in Computing Award from the National Center for Women & Information Technology (NCWIT).

Each year, U.S. high school students in grades 9 through 12 who are women, genderqueer, or non-binary are eligible to receive recognition for their aptitude and aspirations in technology and computing, as demonstrated by their computing experience, computing-related activities, leadership experience, tenacity in the face of barriers to access, and plans for post-secondary education. This year, 40 winners and 360 honorable mentions were selected from more than 3,300 amazing, talented young applicants.

Samyak named top 300 Scholar in the Regeneron Science Talent Search (STS)

The Society for Science announced that tlab intern Samyak Shrimali has been named a top 300 Scholar in the 82nd Regeneron Science Talent Search (STS) — the nation’s oldest and most prestigious science and mathematics competition for high school seniors.

Regeneron STS recognizes and empowers the most promising young scientists in the U.S. who are creating the ideas and solutions that solve our most urgent challenges. A listing of all 300 Scholars is available from the Society for Science; a total of 1,949 students around the country entered the competition this year. Each Scholar will receive $2,000, and their schools will also receive $2,000 to use toward STEM-related activities.

NEW PAPER: Multi-tasking Memcapacitive Networks

D. Tran and C. Teuscher, “Multi-tasking Memcapacitive Networks,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2023. doi: 10.1109/JETCAS.2023.3235242.

Abstract:

Recent studies have shown that networks of memcapacitive devices provide an ideal low-power computing platform for reservoir computing systems. Random, crossbar, or small-world power-law (SWPL) structures are common topologies for reservoir substrates that compute single tasks. However, neurological studies have shown that the interconnections of cortical brain regions associated with different functions form a rich-club structure. This structure allows human brains to perform multiple activities simultaneously. So far, memcapacitive reservoirs can perform only single tasks. Here, we propose, for the first time, cluster networks functioning as memcapacitive reservoirs that perform multiple tasks simultaneously. Our results show that cluster networks surpassed crossbar and SWPL networks by factors of 4.1×, 5.2×, and 1.7× on three tasks: Isolated Spoken Digits, MNIST, and CIFAR-10. Compared to single-task networks in our previously published results, multitasking cluster networks achieved similar accuracies of 86%, 94.4%, and 27.9% for MNIST, Isolated Spoken Digits, and CIFAR-10. Our extended simulations reveal that both the input signal amplitudes and the inter-cluster connections contribute to the accuracy of cluster networks. Selecting optimal values for signal amplitudes and inter-cluster links is key to obtaining high classification accuracy and low power consumption. Our results illustrate the promise of memcapacitive brain-inspired cluster networks and their capability to solve multiple tasks simultaneously. Such novel computing architectures have the potential to make edge applications more efficient and to allow non-reconfigurable systems to solve multiple tasks.
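The multi-tasking idea can be illustrated, in spirit, with a conventional software reservoir: one fixed substrate produces a shared state trace, and several independent linear readouts are trained on it, one per task. The sketch below is only an analogy under that assumption; it uses a random tanh reservoir and two toy targets rather than the paper's memcapacitive device models, cluster topologies, or its Isolated Spoken Digits, MNIST, and CIFAR-10 benchmarks.

```python
import numpy as np

# Illustrative sketch only: a single fixed reservoir whose shared state trace
# drives several independently trained linear readouts, one per task. This is
# a software analogy for one substrate serving multiple tasks at once; it does
# not model memcapacitive devices or the paper's cluster networks.

rng = np.random.default_rng(1)
n_res, T = 200, 1000

W = rng.normal(0, 1, (n_res, n_res))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))       # keep the dynamics stable
W_in = rng.uniform(-1, 1, (n_res, 1))

u = rng.uniform(-1, 1, (T, 1))                         # shared input stream
x = np.zeros(n_res)
X = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    X[t] = x

# Two toy tasks read out from the same states: short-term memory (recall the
# input 5 steps ago) and a nonlinear transform of the current input.
targets = {
    "delay-5 recall": np.roll(u, 5, axis=0),
    "nonlinear map":  np.sin(3 * u),
}

lam = 1e-6
for name, y in targets.items():
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
    mse = float(np.mean((X @ W_out - y) ** 2))
    print(f"{name}: readout MSE = {mse:.4f}")
```

Each readout is trained separately on the same state matrix, so adding a task only adds a readout; the substrate itself is untouched, which is the property the paper exploits for multi-tasking.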