Title: Advanced Computing

Dr. Satish Chand
Professor, Department of Computer Engineering,
Netaji Subhas Institute of Technology, New Delhi, India
Email: schand20@gmail.com

Dr. Satish Chand is a Professor in the Department of Computer Engineering, Netaji Subhas Institute of Technology, Delhi. He received his M.Sc. in Mathematics from the Indian Institute of Technology Kanpur, his M.Tech. in Computer Science from the Indian Institute of Technology Kharagpur, and his Ph.D. in Computer Science from Jawaharlal Nehru University, Delhi. His research spans several areas, including image processing, video processing, digital watermarking, wavelet applications, and sensor networks. He has published about fifty papers in international journals and in international and national conferences. He reviews papers from time to time for several international journals, including IEEE Transactions on Multimedia, IEEE Transactions on Broadcasting, IEEE Transactions on Circuits and Systems for Video Technology, Communication Networks (Elsevier), Journal of Network and Computer Applications (Elsevier), Multimedia Systems (Springer), Journal of Network and Systems Management (Springer), the Technical Journal: Computer Engineering of the Institute of Engineers (India), and many others. He has delivered talks at various institutions, including the Department of Computer Science & Engineering, IIT Kanpur, and the Department of Electrical Engineering, IISc Bangalore.


Abstract
The term ‘Advanced computing’ is a general term that may mean different things to different people. For example, it may describe a specific type of high-end computer and the processes run on it, such as a supercomputer carrying out massive assignments. It may also refer to a set of programming skills that are useful on relatively low-speed computers. In this talk, we discuss Advanced Computing in terms of both supercomputers and relatively low-speed computers, i.e., how relatively low-speed computers can be used to solve massive problems that are difficult to solve even with supercomputers.

Consider, for example, the NUG30 problem, a quadratic assignment problem (QAP) proposed in 1968 by Nugent et al. as a test of computing capabilities. It is stated as follows.

Given a set of n facilities and a set of n locations, a distance is specified for each pair of locations, and a weight or flow is specified for each pair of facilities (e.g., the amount of supplies transported between the two facilities). The problem is to assign all facilities to distinct locations so as to minimize the sum of the distances multiplied by the corresponding flows.
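
To make the objective concrete, the following is a minimal brute-force sketch in Python. The flow and distance matrices are small hypothetical examples, not the NUG30 data; the point is only that the cost of an assignment p is the sum over all facility pairs (i, j) of flow[i][j] * dist[p[i]][p[j]], minimized over all n! permutations.

    from itertools import permutations

    def qap_cost(flow, dist, p):
        # Cost of assigning facility i to location p[i]: the sum of
        # flow[i][j] * dist[p[i]][p[j]] over all facility pairs (i, j).
        n = len(flow)
        return sum(flow[i][j] * dist[p[i]][p[j]]
                   for i in range(n) for j in range(n))

    def qap_brute_force(flow, dist):
        # Check all n! assignments; feasible only for very small n.
        n = len(flow)
        best = min(permutations(range(n)),
                   key=lambda p: qap_cost(flow, dist, p))
        return best, qap_cost(flow, dist, best)

    # Hypothetical 3x3 instance (illustrative data, not NUG30).
    flow = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]
    dist = [[0, 8, 15], [8, 0, 13], [15, 13, 0]]
    print(qap_brute_force(flow, dist))

For n = 30 this exhaustive enumeration is hopeless, which is what motivates the approach described below.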

The availability of powerful, easily programmable, low-cost computing platforms has tremendous implications for the solution of complex optimization problems and other computationally demanding problems.

A QAP with 30 facilities and 30 locations is extremely hard: there are 30! (about 2.65 × 10^32) possible assignments. Even if one could check a trillion assignments per second, exhaustive enumeration would take more than 100 times the age of the universe. To solve this problem, a state-of-the-art branch-and-bound algorithm was designed that reduced the number of assignments to a manageable level by repeatedly eliminating possibilities that could not lead to an optimal assignment. The remaining possibilities were explored using a large, geographically distributed, networked collection of computers and other resources such as storage and I/O devices. The computers were accessed via a high-throughput computing system known as Condor, developed at the University of Wisconsin. The algorithm was implemented using the master-worker distributed-processing interface to Condor, and the Globus toolkit was also used to obtain some of the computational resources. The approach followed in this project, the metaNEOS platform, is nothing but Grid Computing. There are other types of computing as well, such as cluster computing and mobile and pervasive computing.
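
To illustrate the elimination principle, here is a hedged branch-and-bound sketch: any partial assignment whose cost already matches or exceeds the best complete assignment found so far is discarded, together with every completion it would have generated. The real NUG30 computation used far stronger lower bounds than the simple partial cost shown here; all data and function names below are illustrative, not taken from the original solver.

    def partial_cost(flow, dist, assigned):
        # Cost contributed by the facilities assigned so far
        # (facility i occupies location assigned[i]).
        k = len(assigned)
        return sum(flow[i][j] * dist[assigned[i]][assigned[j]]
                   for i in range(k) for j in range(k))

    def qap_branch_and_bound(flow, dist):
        n = len(flow)
        best_cost, best_assignment = float('inf'), None

        def search(assigned, used):
            nonlocal best_cost, best_assignment
            cost = partial_cost(flow, dist, assigned)
            # Prune: with non-negative flows and distances, the partial
            # cost is a valid lower bound, so this branch cannot improve
            # on the best complete assignment already found.
            if cost >= best_cost:
                return
            if len(assigned) == n:
                best_cost, best_assignment = cost, tuple(assigned)
                return
            for loc in range(n):
                if loc not in used:
                    assigned.append(loc)
                    used.add(loc)
                    search(assigned, used)
                    used.remove(loc)
                    assigned.pop()

        search([], set())
        return best_assignment, best_cost

    # Reusing the hypothetical 3x3 instance from above:
    flow = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]
    dist = [[0, 8, 15], [8, 0, 13], [15, 13, 0]]
    print(qap_branch_and_bound(flow, dist))

Broadly speaking, in the metaNEOS setting the master maintained a pool of unexplored subproblems of this kind and farmed them out to whatever Condor workers happened to be available, which is what made an otherwise intractable search tree manageable.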