Vishal Dhupar, Managing Director, South Asia, NVIDIA Graphics, and Dr Shyam Diwakar, Assistant Professor and Lab Director, Computational Neuroscience and Neurophysiology Labs, Amrita University, explain to Sachin Jagdale the benefits offered by the NVIDIA Tesla Accelerated Computing Platform
What are the features of the Accelerated Computing Platform?
Vishal Dhupar (VD): The NVIDIA Tesla Accelerated Computing Platform is the leading platform for accelerating big data analytics and scientific computing. The platform combines the world’s fastest Graphics Processing Unit (GPU) accelerators, the widely used CUDA parallel computing model, and a comprehensive ecosystem of software developers, software vendors, and data centre system OEMs to accelerate discovery and insight.
The Tesla platform includes comprehensive system management tools to simplify GPU administration, monitor health and other metrics, and improve resource utilisation and efficiencies. Many of the HPC industry’s most popular and powerful cluster and infrastructure management tools use NVIDIA system management APIs to support GPUs, including IBM Platform HPC.
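As a rough illustration of what these management APIs expose, the sketch below queries each GPU's name, temperature and utilisation through NVML, the NVIDIA Management Library that tools such as nvidia-smi and cluster managers build on. It is a minimal orientation example, not code from any of the products mentioned.

```c
// Minimal NVML sketch: report name, temperature and utilisation of each GPU.
// Build (assumed toolchain): gcc nvml_query.c -lnvidia-ml -o nvml_query
#include <stdio.h>
#include <nvml.h>

int main(void) {
    unsigned int count, i;
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "Failed to initialise NVML\n");
        return 1;
    }
    nvmlDeviceGetCount(&count);
    for (i = 0; i < count; ++i) {
        nvmlDevice_t dev;
        char name[NVML_DEVICE_NAME_BUFFER_SIZE];
        unsigned int temp;
        nvmlUtilization_t util;

        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetName(dev, name, sizeof(name));
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &temp);
        nvmlDeviceGetUtilizationRates(dev, &util);

        printf("GPU %u: %s  %u C  GPU util %u%%  memory util %u%%\n",
               i, name, temp, util.gpu, util.memory);
    }
    nvmlShutdown();
    return 0;
}
```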
Does NVIDIA provide training support for handling the Accelerated Computing Platform? If yes, please explain in detail.
VD: Yes, we offer training in two key programming models – CUDA and OpenACC.
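For orientation, here is a minimal CUDA sketch of the kind of code such training covers: a SAXPY kernel that assigns one GPU thread per array element. It is an illustrative example, not NVIDIA course material.

```cuda
// Minimal CUDA sketch: SAXPY (y = a*x + y) on the GPU.
// Build (assumed toolchain): nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));        // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // ~4096 blocks of 256 threads
    cudaDeviceSynchronize();

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}
```

In OpenACC, by contrast, the loop stays in plain C or Fortran and is offloaded with a compiler directive such as #pragma acc parallel loop, which is why it appeals to domain scientists who prefer not to manage kernels and device memory explicitly.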
Do you have any tie-ups with hospitals? Please elaborate.
VD: No. But we have tie-ups with research organisations around the world, including in India. We also have tie-ups with major institutions, including IIT Bombay and Amrita University.
What is the implementation cost of the Accelerated Computing Platform?
VD: It varies with the number of nodes in a cluster and the type of applications deployed.
What are the key transformations that NVIDIA has brought to laboratory operations?
Dr Shyam Diwakar (SD): With GPUs, we are able to simulate millions of neurons and how they interact across several million synaptic connections as they compute what could be a sensory or motor signal in brain circuits. We have also been able to simulate what happens to these circuits when certain drugs are used to modify or remedy particular behavioural conditions.
Ordinary CPUs could be used for detailed simulations, but for very large-scale simulations of how neurons interact to perform functions, we are finding that GPUs make the work far more effective time-wise. With GPUs, most of the tools had to be created from scratch, because the scale at which these experiments are performed is so unusual. We also expect this set of codes to be released as a library that anyone can use to simulate brain circuits and their properties under therapeutic or pharmacological conditions.
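The lab's own library is not shown here, but as a rough sketch of why such simulations map well to GPUs, the hypothetical kernel below advances a population of leaky integrate-and-fire neurons by one time step, with one CUDA thread per neuron. The neuron model, parameter values and array names are illustrative assumptions, not Amrita's actual code.

```cuda
// Hypothetical sketch: one Euler step of a leaky integrate-and-fire population.
// One CUDA thread per neuron; launch e.g. lif_step<<<(n+255)/256, 256>>>(...).
__global__ void lif_step(int n_neurons, float dt,
                         float *v,            // membrane potentials (mV)
                         const float *i_syn,  // summed synaptic input per neuron
                         int *spiked)         // 1 if the neuron fired this step
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_neurons) return;

    const float tau_m   = 20.0f;   // membrane time constant (ms), assumed
    const float v_rest  = -65.0f;  // resting potential (mV), assumed
    const float v_th    = -50.0f;  // firing threshold (mV), assumed
    const float v_reset = -70.0f;  // reset potential (mV), assumed

    // dv/dt = (-(v - v_rest) + i_syn) / tau_m, integrated with forward Euler
    float v_new = v[i] + (-(v[i] - v_rest) + i_syn[i]) * (dt / tau_m);

    if (v_new >= v_th) {           // threshold crossing: spike and reset
        spiked[i] = 1;
        v_new = v_reset;
    } else {
        spiked[i] = 0;
    }
    v[i] = v_new;
}
```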
Besides, students are now interested in these coding strategies and in developing their own programming case studies on such technologies. Amrita School of Biotechnology has been teaching GPU programming to its bioinformatics students and also functions as a GPU teaching and research centre. We have also established our GPU supercomputer, allowing students and faculty members to solve large mathematical models in less time.
Which key discoveries/achievements have been made using this technology at your lab?
SD: In our research at Amrita School of Biotechnology’s Computational Neuroscience Lab, we used simple models of neurons to build large-scale models of neural circuits. Such circuits are built by taking experimental data from slices of living brain tissue, or by recording directly from a human subject or an anaesthetised animal and studying its responses to certain inputs such as touch or grasp. Developing a GPU-based model allowed for more complex algorithms and sophisticated network simulations at much faster computational speeds. GPUs are composed of thousands of connected cores, each capable of performing a significant amount of computation; so, in a way, GPUs resemble the neural networks inside our brains, processing information in parallel. We implement models of brain circuits both in their normal condition and as they dysfunction in diseased states. By allowing such reconstructions, this modelling supports predictions of normal function and disease, and of paradigms to treat them. We have published parts of this work, defining how the circuit functions and how disease models explain dysfunction, in several conferences and book chapters. Some of our major studies are still being prepared, given that our models have been in development for less than three years.
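To give a flavour of the network side of such simulations, the hypothetical kernel below propagates spikes across a sparse synapse list, one CUDA thread per synapse, accumulating weighted input for each postsynaptic neuron. The data layout is an assumption made for illustration and does not represent the lab's published models.

```cuda
// Hypothetical sketch: propagate spikes through a sparse synapse list.
// One CUDA thread per synapse; connectivity arrays are illustrative assumptions.
__global__ void propagate_spikes(int n_synapses,
                                 const int   *pre,     // presynaptic neuron index
                                 const int   *post,    // postsynaptic neuron index
                                 const float *weight,  // synaptic weight
                                 const int   *spiked,  // 1 if presynaptic neuron fired
                                 float *i_syn)         // accumulated input per neuron
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= n_synapses) return;

    if (spiked[pre[s]]) {
        // Several synapses may target the same neuron, so accumulate atomically.
        atomicAdd(&i_syn[post[s]], weight[s]);
    }
}
```

Run alternately with a per-neuron update kernel like the one sketched earlier, such code advances millions of neurons and synapses in parallel at every time step.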
What is your prediction regarding the future of accelerated computing platforms in India?
SD: Amrita University has been instrumental in translating societally relevant problems from labs to villages, whether modelling neurological disorders, predicting landslides in the Western Ghats at our deployment locations, studying glaciers in the Himalayas, enabling smart agriculture and technology-enhanced organic farming, or developing novel biomedical devices for diabetes and voice synthesisers. Most of these problems require crunching big data or running large sets of differential equations. Advanced computing in its varieties, together with a national supercomputing machinery, will essentially drive robotics, weather prediction, models of monsoon changes, farming patterns, crime analysis, news popularity listings, social media diffusion, drug design, bioanalytical design and testing, pharmaceutical product development, medical decision making, cryptology, cyberphysical interactions and a lot more. Today’s GPUs allow large-scale parallel processing. Together with newer and advanced computing architectures, parallelisation will be seen as crucial in everything from food processing, hospital networks and computer games to banking, planning, rehabilitation and even inventory processing in the funeral services industry. While academia will look into hybrid solutions combining supercomputing on a PC with large-scale petaflop clusters, joint industry-academia sectors may introduce changing trends with cloud computing and pervasive, ubiquitous computing platforms.