Multi-tier Big Data Pipelines from Edge to the Cloud Data Centers

HiPC 2019, 17 December 2019 – Hyderabad, India

 

 

Keynote Talk

“Decentralised technologies for orchestrated cloud-to-edge intelligence”

Keynote Speaker

Domenico Siracusa

Head of the RiSING Research Unit at FBK CREATE-NET, Italy, and coordinator of the Decenter project (http://www.decenter-project.eu/).

More details can be found here.

_______________________________________________________________________________________________________________________________________________________________________________

Workshop Schedule:

 

Keynote Talk

“Decentralised technologies for orchestrated cloud-to-edge intelligence”

Presenter: Domenico Siracusa 

Time: 2:00 – 3:30 PM

 

Workshop Paper Presentations

“Intelligent Deep Reinforcement Learning based Resource Allocation in Fog network”

Presenters: Divya V., Leena Sri

Time: 3:30 – 4:00 PM

 

Break: 4:00 – 4:30 PM

 

“Wireless Water Quality Monitoring and Quality Deterioration Prediction System”

Presenter: Anala M R

Time: 4:30 – 5:00 PM

“Improving Throughput of BigData Applications”

Presenter: Janardhana Reddy Naredula

Time: 5:00 – 5:30 PM

_______________________________________________________________________________________________________________________________________________________________________________

 

Call for Workshop Papers

Today, huge amounts of data are being generated by Internet of Things (IoT) devices such as smartphones, sensors, cameras, cars and robots. Big Data platforms (such as Hadoop and Spark) exist to process the generated data. Conventionally, they are deployed in centralised Data Centers, which, however, fall short of addressing the time-critical requirements of applications due to the high latency between the Edge, where the data are generated, and the Data Centers, where they are processed. The emerging Edge/Fog computing paradigm promises to solve this problem by seamlessly integrating hardware and software resources across multiple computing tiers, from the Edge to the Data Center/Cloud. Since computing resources at the Edge may be power and capacity constrained, it is necessary to invent new lightweight platforms and techniques that seamlessly interact, sense, execute and produce results with very low latency, while at the same time addressing other high-level requirements of applications, such as security and privacy.

 

To address these problems, many challenges must be tackled through the invention of new architectures, methods, algorithms and solutions that:

 

 

  • Integrate and process data from underlying IoT platforms and services
  • Smartly select data streams for processing
  • Address the four “V”s of Big Data: volume, variety, velocity and veracity
  • Improve energy-efficient management of resources and task processing
  • Address the QoS and time-critical aspects of smart applications
  • Facilitate intelligent integration of information arising from various sources
  • Address the requirements of very dynamic Big Data pipelines (e.g. moving smartphones, sensors, cars, robots with dynamically changing requirements for processing)
  • Provide orchestration methods and scheduling policies that address dependability, reliability, availability and other high-level application requirements
  • Adequately address the inherent variability of resources from the Edge to the Data Centers
  • Provide new architectures which use the powerful computing resources of Data Centers, while at the same time providing optimal QoS to applications
  • Address the decentralisation aspects through the use of Blockchain-based Smart Contracts and Oracles
  • Implement distributed Artificial Intelligence methods from the Edge to the Data Center/Cloud
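
As a purely illustrative aid for readers new to the multi-tier idea, the short Python sketch below shows one way an edge-tier node might aggregate raw sensor readings locally and forward only a compact summary (plus out-of-range values) to the cloud tier, touching on the stream-selection and resource-constraint challenges listed above. All names, thresholds and the hand-off function are hypothetical assumptions made for illustration; they do not refer to any specific platform.

  # Hypothetical edge-tier pre-processing sketch: aggregate one window of
  # readings locally and forward only a compact summary to the cloud tier.
  # All names and thresholds are illustrative assumptions.
  import json
  import statistics
  from typing import Iterable

  ANOMALY_THRESHOLD = 75.0  # assumed upper bound for a "normal" reading

  def summarise_window(readings: Iterable[float]) -> dict:
      """Aggregate one window of readings on the edge node."""
      values = list(readings)
      return {
          "count": len(values),
          "mean": statistics.fmean(values),
          "max": max(values),
          "anomalies": [v for v in values if v > ANOMALY_THRESHOLD],
      }

  def forward_to_cloud(summary: dict) -> None:
      """Placeholder for the cloud-tier hand-off (e.g. an HTTP or MQTT call)."""
      print("to cloud:", json.dumps(summary))

  if __name__ == "__main__":
      # One window of (simulated) sensor data generated at the edge.
      window = [21.3, 22.1, 20.9, 80.2, 21.7]
      forward_to_cloud(summarise_window(window))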

Special Issue of the Software: Practice and Experience journal

Authors of selected best papers will be invited to submit to a Special Issue of the Software: Practice and Experience journal.

 


 

Aims:

The workshop aims to bring together scientists and practitioners interested in the intricacies of implementing large-scale Big Data pipelines. Our intention is to discuss the various problems, challenges, new approaches and technologies in this hot new area of research, to shortlist the most challenging problems, to shape future directions for research, and to foster the exchange of ideas, standards and common requirements. We look for high-quality work that addresses various aspects of the investigated problem.

 

Manuscript Guidelines

Submitted manuscripts should be structured as technical papers and may not exceed six (6) single-spaced double-column pages using 10-point size font on 8.5 × 11 inch pages (IEEE conference style), including figures, tables, and references. See IEEE style templates at this page for details.

Electronic submissions must be in the form of a readable PDF file. All manuscripts will be reviewed by the Program Committee and evaluated on originality, relevance of the problem to the conference theme, technical strength, rigor in analysis, quality of results, and organization and clarity of presentation of the paper.

Submitted papers must represent original unpublished research that is not currently under review for any other conference or journal. Papers not following these guidelines will be rejected without review and further action may be taken, including (but not limited to) notifications sent to the heads of the institutions of the authors and sponsors of the conference.

Presentation of an accepted paper at the workshop is a requirement of publication. Any paper that is not presented at the conference will not be included in the proceedings.

 

Important Dates

Paper Submission : September 20th, 2019

Notification to Authors : October 14th, 2019

Workshop camera-ready : October 28th, 2019

 

Submission Portal

Easychair Submission Link:  https://easychair.org/my/conference?conf=bigdatapipelines2019

 

Organizing Committee

Vlado Stankovski, University of Ljubljana, Slovenia

Rajkumar Buyya, CLOUDS Lab, The University of Melbourne, Australia

Shashikant Ilager, CLOUDS Lab, The University of Melbourne, Australia

 

Program Committee Members

Yogesh Barve, Vanderbilt University, USA

Emiliano Casalicchio, Blekinge Institute of Technology and Sapienza University of Rome, Italy

Kyle Chard, University of Chicago and Argonne National Lab, USA

Jānis Grabis, Riga Technical University, Latvia

Peter Kacsuk, Hungarian Academy of Sciences, Hungary

Dragi Kimovski, University of Klagenfurt, Austria

Marta Patiño-Martínez, Universidad Politécnica de Madrid, Spain

Dana Petcu, West University of Timisoara, Romania

Francisca Pérez, Universidad San Jorge, Spain

Deepak Poola Chandrashekar, IBM, India

Yogesh Simmhan, Indian Institute of Science, India

Heru Suhartanto, Universitas Indonesia, Indonesia

Huaming Wu, Tianjin University, China

Minxian Xu, Shenzhen Institutes of Advanced Technology, China

Aleš Zamuda, University of Maribor, Slovenia

 

Contact details

Vlado Stankovski, Ph.D.

Associate Professor of Computer Science

Distributed and Cloud Computing

University of Ljubljana, Slovenia

Email: vlado.stankovski@fgg.uni-lj.si

Phone: +386 41 200 565

 

Dr. Rajkumar Buyya, Redmond Barry Distinguished Professor

Director, Cloud Computing and Distributed Systems (CLOUDS) Lab

School of Computing and Information Systems

The University of Melbourne

Email: rbuyya@unimelb.edu.au

URL: http://www.buyya.com | http://www.cloudbus.org/~raj

 

Shashikant Ilager, Research Scholar

CLOUDS Lab, School of Computing and Information Systems

The University of Melbourne

Email: silager@student.unimelb.edu.au