The program of PDP 2017 is available here.

PDP 2017 Keynote speakers

 

1. Challenges in Computing Accelerators and Heterogeneous Computing

Prof. Didier El Baz, LAAS/CNRS, Toulouse, France

 

2. Hybrid high performance computing: Cost and Benefit for Scientific Applications

Prof. Vladimir Zaborovsky, Saint Petersburg Polytechnic University, St. Petersburg, Russia

 

3. Metascheduling and Resource Management in Grid and Cloud Computing

Prof. Victor Toporkov, National Research University “MPEI”, Moscow, Russia

 

4. Big Data, Semantic Structures and Causal Decision-making Models

Prof. Vladimir I. Gorodetsky, SPIIRAS, St. Petersburg, Russia


Prof. Didier El Baz

LAAS/CNRS,

Toulouse, France

 

Challenges in Computing Accelerators and Heterogeneous Computing

 

Abstract. In this talk, we present challenges in computing accelerators such as GPUs, Intel Xeon Phi, and Intel Knights Landing, which are often tricky to program.

We also present new challenges in heterogeneous computing.

In particular, we detail the optimization of SIMD parallel codes for embarrassingly parallel applications.

We emphasize efficiency issues, loop parallelism, task parallelism, multithreading, and vectorization, as well as memory management.

We also emphasize the investment, in terms of human effort, required to program codes on parallel platforms with computing accelerators. Finally, we present and analyze numerical results obtained with several GPUs, such as the K40 and K80, the Intel Xeon Phi computing accelerator, and clusters, for a real-world trajectography problem in the aerospace domain, i.e., a SIMD Monte Carlo numerical simulation, and we report on other experiments around the world.
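
To give a concrete flavor of the loop parallelism, multithreading, and memory-management issues mentioned above, here is a minimal C++/OpenMP sketch of an embarrassingly parallel Monte Carlo computation. It is not code from the talk; the sample count, seeding scheme, and use of the POSIX rand_r generator are illustrative assumptions.

// Minimal OpenMP sketch of an embarrassingly parallel Monte Carlo
// computation (illustrative only; not code from the talk).
// Estimates pi by sampling points in the unit square: every sample is
// independent, so the loop parallelizes trivially across threads.
#include <cstdio>
#include <cstdint>
#include <cstdlib>   // rand_r (POSIX)
#include <omp.h>

int main() {
    const std::int64_t n_samples = 100000000;  // illustrative sample count
    std::int64_t hits = 0;                     // samples inside the unit circle

    // The reduction gives each thread a private counter and sums them at
    // the end, avoiding contention on shared memory.
    #pragma omp parallel reduction(+ : hits)
    {
        // Per-thread RNG state so threads never share one random stream.
        unsigned int seed = 12345u + 977u * (unsigned)omp_get_thread_num();
        #pragma omp for
        for (std::int64_t i = 0; i < n_samples; ++i) {
            double x = rand_r(&seed) / (double)RAND_MAX;
            double y = rand_r(&seed) / (double)RAND_MAX;
            if (x * x + y * y <= 1.0) ++hits;
        }
    }
    std::printf("pi ~ %.6f\n", 4.0 * (double)hits / (double)n_samples);
    return 0;
}

Even in this tiny example, the per-thread state and the reduction illustrate why naive porting of serial code to accelerators is rarely sufficient.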

 

Bio:

Didier El Baz received the Dr. Engineer degree in Electrical Engineering and Computer Science from the Institut National des Sciences Appliquees (INSA), Toulouse, France, in January 1984, and was a Visiting Scientist at the Laboratory for Information and Decision Systems, MIT, USA, from March 1984 to February 1985.

Dr. El Baz received the “Habilitation a Diriger des Recherches” (HDR) in Computer Science from INP Toulouse in 1998.

Dr. El Baz is the Head of the Distributed Computing and Asynchronism (CDA) team at LAAS-CNRS, Toulouse, France. His fields of interest are parallel and distributed computing, computing accelerators, and high-performance computing, with applications to numerical simulation and combinatorial optimization.

Dr. Didier El Baz has written forty papers in highly reputed international scientific journals; he has edited 8 books and written 5 book chapters and 80 papers in the proceedings of reputed international conferences. Dr. El Baz received an NVIDIA Academic Partnership in 2010 and various support from NVIDIA Corporation in 2014 and 2016.

Dr. El Baz was General Chair of the International Conference on Parallel, Distributed and Network-Based Processing (PDP) in 2008 (Toulouse, France) and 2009 (Weimar, Germany), and Chair of the IEEE Workshop on Parallel Computing and Optimization, held in conjunction with IEEE IPDPS, from 2012 to 2017.

Dr. El Baz was Program Chair of the IEEE International Conference on Computer Science and Engineering (CSE) 2014 in Chengdu, China; General Chair of IEEE CSE 2015 in Porto, Portugal; Executive Chair of the 15th IEEE International Conference on Scalable Computing and Communications (ScalCom 2015) in Beijing, China; General Chair of IEEE ScalCom 2016 in Toulouse, France; and General Chair of the 13th IEEE International Conference on Ubiquitous Intelligence and Computing (UIC 2016) and the 13th IEEE International Conference on Advanced and Trusted Computing (ATC 2016), both in Toulouse.

Dr. El Baz has given invited talks at the China University of Petroleum (Qingdao), the China University of Geosciences (Beijing), and CERIST (Algiers, Algeria), as well as at several universities and institutes in France, including the University of Paris XIII, the University of Perpignan, CEA, CNES, and CNRS. He has also given invited talks at several international conferences, such as the ILAS Conference in Barcelona, the CERIST Conference 2016, and IIKI 2014, and at several international workshops.

Dr. El Baz has also given courses on high-performance computing at several universities, including the University of Science and Technology Beijing (July 4-7, 2016 and July 20-24, 2015), Sichuan University (June 27-30, 2016), and the University Houari Boumediene, Algiers (April 18-20, 2016).


Prof. Vladimir Zaborovsky

 

Saint Petersburg Polytechnic University, St. Petersburg, Russia

 

Hybrid high performance computing: Cost and Benefit for Scientific Applications

 

Abstract. Today, supercomputers play an important role in scientific discovery and are used in almost all fields of applied science. The ability to find effective ways to improve high-performance computing technologies is a key factor in the competitiveness of high-tech industries. However, despite the continued increase in the capabilities of supercomputer systems, they are still inadequate to meet the needs of priority engineering tasks. Complete digital solutions of complex problems are heterogeneous in nature, determined both by the performance of CPUs and by the bandwidth of RAM. No doubt the future of high-performance computing (HPC) will involve hybrid processor architectures, a wide range of interconnects, large memory capacities, and advances in software engineering.

In this talk, we focus on questions regarding the choice of architecture for a hybrid system that can concentrate all available computing resources on specific application tasks. Our answers are based on experience in the design and operation of a hybrid 1-Pflops supercomputer at St. Petersburg Polytechnic University (Polytech). We explain the specific features of the proposed hybrid platform that allow it to deliver computing performance of tens or even hundreds of teraflops to a single task. The cost of such a hybrid system lies in specialized hardware and software solutions as well as in the competence of staff and users. For example, the choice of protocols for parallel interaction, such as MPI or OpenMP, reflects the system architecture, the latency of the interconnection network, and the specific features of the task solver.

We show that the Polytech supercomputer platform (which includes a cluster of 612 compute nodes, each with 28 cores and 64 GB of memory, GPGPU accelerators, an MPP system, a cluster with 3072 cores and up to 12 TB of global RAM, a hybrid cloud infrastructure, and a shared 1 PB storage system based on the Lustre file system) significantly expands the programming model and makes it possible to use not only custom codes but also widespread packages such as MATLAB or ANSYS. All the benefits of the proposed hybrid platform follow from the ability to decompose many engineering problems into subsets of vector, superscalar, and concurrent multithreaded operations, which can execute efficiently on the cache-friendly, MPP, or SIMD systems that together form the hybrid platform. We conclude by stressing that the opportunity to educate and train professionals in the hybrid computing paradigm is an additional benefit of the Polytech supercomputer research program.
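
As a rough illustration of how the MPI/OpenMP choice maps onto such a two-level architecture, here is a minimal C++ sketch (not the Polytech codes; the problem, sizes, and constants are illustrative) that distributes coarse chunks of work across MPI ranks on different nodes while OpenMP threads share each node's memory:

// Minimal hybrid MPI + OpenMP sketch (illustrative; not the Polytech codes).
// MPI splits a global sum across cluster nodes; OpenMP threads parallelize
// each node's chunk in shared memory -- the two-level decomposition the
// abstract describes.
#include <cstdio>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 100000000;            // global number of quadrature terms
    const long chunk = n / size;         // contiguous chunk per MPI rank
    const long lo = rank * chunk;
    const long hi = (rank == size - 1) ? n : lo + chunk;

    // Node-local, shared-memory parallelism over this rank's chunk.
    double local = 0.0;
    #pragma omp parallel for reduction(+ : local)
    for (long i = lo; i < hi; ++i) {
        double x = (i + 0.5) / n;        // midpoint-rule quadrature for pi
        local += 4.0 / (1.0 + x * x);
    }

    // Cross-node reduction over the interconnect (the latency-sensitive step).
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("pi ~ %.9f\n", global / n);

    MPI_Finalize();
    return 0;
}

Built with, e.g., mpicxx -fopenmp, the sketch crosses the interconnect only once in the final reduction, while all fine-grained parallelism stays in node-local shared memory.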

 

Bio:

Vladimir Zaborovsky’s major contributions are in telematics, robotics, information security, and applied supercomputing. Dr. Zaborovsky received his degrees in Computer Science (M.Sc. in 1979 and Ph.D. in 1983) from Polytechnic University of Saint Petersburg, Russia.

In 1993 he joined RUSnet Ltd, a start-up project, as a Senior Manager responsible for R&D. At that time his interests included multimedia protocols, supercomputer applications, and a new concept for managing the concurrent interaction of information and physical systems, which later became known as the cyber-physics approach. Between 1993 and 1995 Dr. Zaborovsky carried out research in collaboration with the Department of Computer Science at KTH, Stockholm. Together with NORDUnet colleagues, he took part in the Russian IP-over-fiber-optic network project, the first phase of the Russian University Network (RUSNet) program. In 1996, in collaboration with Alenia Spazio and the Saint Petersburg Institute for Robotics and Technical Cybernetics, he established an ambitious project: to create a terrestrial and wireless ATM network infrastructure for space robotics applications. The experiments used a 15-meter, 7-degrees-of-freedom space manipulator that had been designed for the Russian space shuttle “Buran”. The remote control of the space robot via the JAMES network was the world's first public-network experiment supporting the cyber-physics concept. The results of this Russian-European collaborative project were successfully demonstrated at the International Astronautical Forum in Turin, Italy, in October 1997.

After 20 years of engineering and research activity, Dr. Zaborovsky received a Dr.Sci.Tech. degree and in 1999 became a professor in the Telematics Department at Polytechnic University and Deputy Chief Designer at the Institute for Robotics and Technical Cybernetics. During this period he developed a new concept of virtual network processors, reconfigurable telematics appliances, and delay-tolerant transport protocols, and he gave a theoretical explanation of the fractal properties of network traffic based on the formalism of fractional derivatives and p-adic metrics. He also focused on research in supercomputer technology for real-time applications, including data processing, routing, VoIP, and robotics, and on innovative technologies including WDM, MPLS, and IPv6. In 2000-2001 Dr. Zaborovsky conducted research on high-speed, secure network systems based on stealth firewall appliances.

In 2004 Vladimir Zaborovsky headed the Telecommunication Information Centre at the Polytechnic University, where he started research in HPC, telematics, and nanotechnology applications; the results of this activity included hybrid cloud computing, protocol design, embedded systems, and robotics applications. In 2011 he was appointed Director of the Polytechnic University Department of Information and Computing Technologies. His research activities are focused on a new generation of computer systems based on heterogeneous multi-core microprocessors and on calculation principles that use single-electron transistors. In 2008-2015, in cooperation with the German Aerospace Center (DLR) and its Institute of Robotics and Mechatronics, Zaborovsky was responsible for the scientific aspects of the space experiment “Kontur” on board the International Space Station (ISS). The main objective of this experiment was to study control protocols for planetary robots operated from on board a manned space station, taking into account signal transmission delays and the multi-purpose goals of operations carried out by a group of robots. Today, Prof. Zaborovsky's scientific interests are associated with the development of network-centric management and the creation of quantum components for supercomputers, in which the fundamental laws of physics and computer science will merge to form a new class of computing processes.

Currently, Prof. Zaborovsky is Director of the Computer Science and Technology Institute at Polytechnic University, where he leads a team of scientists preparing projects on a 1-Pflops hybrid supercomputer, a silicon nano-sandwich quantum register, sensors, and THz generators. Zaborovsky is the author of more than 150 scientific papers and 8 patents.


Prof. Victor Toporkov

 

National Research University “MPEI”, Moscow, Russia

 

Metascheduling and Resource Management in Grid and Cloud Computing

 

Abstract. We address the problem of efficient computing organization and support in distributed environments with non-dedicated resources, including Grid, cloud services, and platforms. So-called hard tasks, such as the processing of physical experiment results at the LHC (CERN), utilize significant distributed computing resources, part of which are shared with their owners. Even within virtual organizations (VOs), this causes competition for resources between independent users, the global user job flow, and the local job flows of the computing resource owners, and it complicates the problem of providing the required quality of service in scalable computing. The scheduling algorithms and approaches known to date, as well as their combinations and heuristics, as a rule do not provide efficient or suboptimal schedules under the conditions of heterogeneous distributed environments with a dynamically changing composition of computing nodes. Under such conditions, the so-called economic models of non-dedicated resource allocation and scheduling in distributed computing turn out to be highly efficient in areas such as Grid, cloud computing, and multi-agent systems.

The study lies in the development of distributed computing models and methods based on the integration of job flow management and parallel application scheduling mechanisms. The novelty of the proposed approach is the development of mechanisms for dynamically relocating job flows between domains of computing nodes, combined with parallel application scheduling that takes into account the preferences of all VO stakeholders. Taking these aspects into account when implementing scalable job flow scheduling systems makes it possible to increase the quality of service and resource utilization efficiency in distributed computing environments.
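
To give a flavor of the economic scheduling idea in miniature, the following toy C++ sketch (an illustration of the general concept, not the authors' algorithms) ranks resource offers by cost per unit of work and selects the cheapest one that meets a job's deadline; all node names, prices, and speeds are invented:

// Toy sketch of economic, preference-based resource selection
// (illustrative; not the authors' algorithms). Each owner advertises a
// node with a price and a performance rating; a job takes the cheapest
// node that can finish before its deadline.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Node { const char* name; double price_per_hour; double speed; };

int main() {
    std::vector<Node> offers = {
        {"owner-A", 2.0, 1.0}, {"owner-B", 1.2, 0.6}, {"owner-C", 3.5, 2.0}};
    const double work = 10.0;      // abstract work units in the job
    const double deadline = 6.0;   // hours

    // Rank offers by cost per unit of work (the "economic" preference).
    std::sort(offers.begin(), offers.end(), [](const Node& a, const Node& b) {
        return a.price_per_hour / a.speed < b.price_per_hour / b.speed;
    });

    // Take the cheapest node that finishes the whole job before the deadline.
    for (const Node& n : offers) {
        double hours = work / n.speed;
        if (hours <= deadline) {
            std::printf("select %s: %.1f h, cost %.2f\n",
                        n.name, hours, hours * n.price_per_hour);
            return 0;
        }
    }
    std::puts("no single node meets the deadline; co-allocation needed");
    return 0;
}

The real problem, of course, involves co-allocation across many nodes, competing job flows, and owner preferences; this sketch only shows the cost/deadline trade-off at the heart of the economic model.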

 

Bio:

Victor Toporkov received his D.Sc. degree in computer science from the Moscow Power Engineering Institute (MPEI), where he conducted research on distributed computing from 1990 to 1996. Prof. Toporkov was Head of the Laboratory “Advanced Computer Control in Avionics” from 1991 to 1996 (Ministry of Science and Education of Russia). Currently, he is Head of the Computer Science Department at the National Research University “MPEI”, where he holds a Full Professor position. He is a Visiting Professor at Wroclaw University of Technology (gold medal award in IT). His primary research interests focus on resource management and scheduling in Grid and cloud computing. Prof. Toporkov is the author or co-author of about 300 papers in computer and computational sciences.


Prof. Vladimir I. Gorodetsky

 

St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences,

St. Petersburg, Russia

 

Big Data, Semantic Structures and Causal Decision-making Models

 

Abstract. Big Data is a mainstream topic of modern intelligent information processing. During the last ten years it has attracted ever-increasing attention from both academia and industry. At the same time, it poses many fundamentally novel and very diverse problems. In its infancy, the prevailing opinion was that almost all of these problems could be overcome through the development of novel formats and structures for data representation (NoSQL or Data Vault, for instance) and the implementation of dedicated software tools for data decomposition and subsequent parallel processing (Hadoop and MapReduce, for example) on high-performance computing systems. However, the Big Data problems turned out to be much deeper. It was discovered, for example, that traditional data processing methods, e.g., statistical ones, fail in many cases due to error accumulation, spurious correlations, and other negative computational effects caused by formidable data size and dimensionality. Moreover, practice has shown that most Big Data processing problems cannot be overcome by modifying existing approaches, models, and algorithms. It is now recognized that new approaches, models, and algorithms should be developed for the specific problems inspired by Big Data, focusing not only on algorithmic efficiency and scalability but also on stability.

The talk considers one of the most frequent classes of problems solved on the basis of Big Data processing: learning a decision-making model when the decision space contains a finite number of alternatives. This task statement covers a wide class of practically important applications, among them recommender and classification systems. The talk discusses a number of algorithms developed for problems of this class. These algorithms focus on the explicit representation and use of Big Data semantics and emphasize causal decision models. The problem statement assumes that the source data are of transactional type and contain heterogeneous attributes (numerical, ordinal, cardinal, etc., along with textual data), and that every particular attribute, or even some subset of attributes, can be designated as target variables. Accordingly, the objective of Big Data processing is to design predictive models for the indicated target variables with finite domains. It is also assumed that the user possesses a software tool capable of forming training data sets (sets of instances of the decision space) for any subset of target variables. In general, exactly this problem statement is formulated for classification and recommender system design. A novel notion at the core of the approach is the data ontology, which is not the same as a domain ontology. A data ontology represents a hierarchy of those and only those concepts for which the data set is the set of instances. Some software tools and lexical databases can be used to build the data ontology; the DBpedia tool together with the Wikipedia hierarchy of categories exemplifies such an instrument. The discussed approach realizes a step-by-step transformation of the source data, resulting finally in a semantically transparent causal decision-making model of optimized dimensionality.
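
As a much-simplified illustration of learning a finite-alternative decision model from transactional data, the toy C++ sketch below (not the authors' algorithms; the attributes and data are invented) designates one attribute of each transaction as the target and predicts its most frequent value for each value of a descriptive attribute:

// Minimal sketch of learning a decision model over transactional data
// (illustrative only). Any attribute could be designated the target;
// here a frequency table maps each value of one descriptive attribute
// to the most likely target value -- a crude stand-in for the
// semantically richer causal models the talk discusses.
#include <algorithm>
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Transaction { std::string segment; std::string bought; };

int main() {
    // Toy transactional data; "bought" is designated as the target variable.
    std::vector<Transaction> data = {
        {"student", "laptop"}, {"student", "laptop"}, {"student", "phone"},
        {"retired", "phone"},  {"retired", "phone"},  {"retired", "tablet"}};

    // counts[segment][bought] = frequency in the training set.
    std::map<std::string, std::map<std::string, int>> counts;
    for (const auto& t : data) ++counts[t.segment][t.bought];

    // Decision rule: for a given segment, predict the most frequent
    // target value (the finite set of alternatives seen in training).
    for (const auto& [segment, dist] : counts) {
        const auto best = std::max_element(
            dist.begin(), dist.end(),
            [](const auto& a, const auto& b) { return a.second < b.second; });
        std::printf("segment=%s -> predict %s\n",
                    segment.c_str(), best->first.c_str());
    }
    return 0;
}

The approach described in the talk goes far beyond such frequency counting, using the data ontology to choose and transform the descriptive attributes; the sketch only fixes the shape of the input (transactions) and output (a finite-alternative decision rule).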

 

Bio:

Vladimir I. Gorodetsky is Professor of Computer Science and Head of the Intelligent Systems Laboratory at the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences.

He received an MS degree in mechanics from the Military Air Force Engineering Academy in St. Petersburg (1960) and an MS degree in mathematics from the Mathematical and Mechanical Department of St. Petersburg State University (1970); he received his Ph.D. (1967) and Doctor of Technical Sciences (1973) degrees in space vehicle optimal control from the Military Air Force Engineering Academy in St. Petersburg.

His main publications (more than 300) deal with optimal control system theory and applications, applied statistics, distributed planning and scheduling, pattern recognition, data mining (including distributed and p2p data mining), data fusion, multi-agent systems, multi-agent technology and applications, computer network security, context-aware recommender systems, self-organized networks and applications, and digital image steganography.

He is a member of IEEE, ISIF (International Society of Information Fusion), IFAAMAS (International Foundation for Autonomous Agents and Multiagent Systems), and the Russian and European Associations for Artificial Intelligence.

His current scientific interests are multi-agent systems, multi-agent technology and software tools, p2p agent networks, multi-agent applications (intelligent logistics, autonomous collective robotics, b2b networks), data mining and machine learning, distributed and p2p data mining, Big Data processing, Big Data ontology, causal data analysis, self-organized networks, recommender systems, and mobile image enhancement.


PDP 2014      PDP 2015       PDP 2016