Objective and organization
- Objective: to practice the different courses taught within the CPS2 track of the Master in Computer Science in the context of a large-scale project
- Project carried out in groups of two students
- Projects are chosen from the list of proposals presented below.

Important: no project may be chosen outside this list.
Evaluation modalities
Evaluation is based on:
- reports produced at each project milestone
- oral presentations given at each project milestone
- the realization itself

Caution: even though the project is carried out in a group, evaluation is done on an individual basis.
Agenda
- September 30, 2019, 6PM: selection of project (see Milestone M0)
- October 07, 2019, 3PM-7PM, EF Room 224: Domain and Technical Scope (see Milestone M1)
- November 18, 2019, 3PM-7PM, EF Room 224: intermediate presentation
- December 09, 2019, 3PM-7PM, EF Room 224: Design Model (see Milestone M2)
- January 20, 2020, 3PM-7PM, EF Room 224: Proof of Concept (see Milestone M3)
Milestones
M0. Selection of project
- When: September 30, 2019, 6PM
- Send an email to Olivier dot Boissier AT emse dot fr with the names of the group members and the chosen project

Caution: each group must work on a different project. The students of the CPS2 track must therefore coordinate and reach an agreement before sending the list of chosen projects.
M1. Definition of the Project Domain and Technical Scope
- When: October 07, 2019, 3PM-7PM
- Where: EF Room 224
- Aim: presentation and discussion of the project definition, and its validation
- Expected results: written technical report and oral presentation

Important: the report must be sent by email to the jury the day before the oral presentation (no printed version). An updated version of the report, incorporating the answers and improvements discussed during the presentation, must be sent by email at most one week after the oral presentation.
The Domain and Technical Scope report should provide:
- Title of the project
- Abstract description
- Objectives and expected results
- Envisioned use cases
- Functional and technical requirements
- Planning of the realization

The oral presentation should satisfy the following constraints:

- 20 min. of presentation of the content of the technical report
- 10 min. of discussion
M2. Delivery of the Design Model
- When: December 09, 2019, 3PM-7PM
- Where: EF Room 224
- Aim: validation and discussion of the project's design model and design decisions
- Expected results: written report, possibly with first demonstrators, and oral presentation

Important: the technical report must be sent by email to the jury the day before the oral presentation (no printed version). An updated version of the report, incorporating the answers and improvements discussed during the presentation, must be sent by email at most one week after the oral presentation.
The Design Model report should provide:
- Updated version of the Domain and Technical Scope report (if changed)
- Software and hardware architecture specification
- Planning of the development with the main milestones

The oral presentation should satisfy the following constraints:

- 20 min. of presentation
- 10 min. of discussion
M3. Delivery of the Proof of Concept
- When: January 20, 2020, 3PM-7PM
- Where: EF Room 224
- Aim: evaluation of the realized project
- Expected results: written report and oral presentation

Important: the report must be sent by email to the jury the day before the oral presentation (no printed version). An updated version of the report, incorporating the answers and improvements discussed during the presentation, must be sent by email at most one week after the oral presentation.
The Proof of Concept report should provide:
- Updated version of the Domain and Technical Scope report (if changed)
- Updated version of the Design Model report (if changed)
- Description of the implementation
- Report on the time spent, compared with the planning presented at the previous examination session

The oral presentation should satisfy the following constraints:

- 30 min. of presentation and demonstration
- 10 min. of discussion
Project Proposals
All the projects deal with:
- definition and selection of data acquisition sensor(s) and/or actuator(s),
- storage and representation of data in RDF format,
- processing of and reasoning on these data,
- data visualisation.

In order to process and represent data with interoperability and reasoning in mind, you will have to base your work on the following ontologies and standards (non-exhaustive list):

- Web of Things (WoT) Thing Description https://www.w3.org/TR/wot-thing-description/
- SSN ontology https://www.w3.org/TR/vocab-ssn/
- BOT ontology https://w3c-lbd-cg.github.io/bot/
- GeoSPARQL standard for geospatial reasoning http://www.opengeospatial.org/standards/geosparql
- UCUM datatypes for units of measure http://w3id.org/lindt/custom_datatypes#ucum
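To make the RDF requirement concrete, here is a minimal sketch of how a single sensor observation could be serialized in N-Triples syntax using SOSA/SSN terms. The `example.org` namespace and the sensor/property names are hypothetical placeholders; only the SOSA terms come from the standard.

```python
# Serialize one sensor observation as RDF in N-Triples syntax,
# using terms from the SOSA/SSN ontology.
SOSA = "http://www.w3.org/ns/sosa/"
RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
EX = "http://example.org/"  # hypothetical project namespace

def observation_to_ntriples(obs_id, sensor, prop, result):
    """Return N-Triples lines describing one sosa:Observation."""
    s = f"<{EX}obs/{obs_id}>"
    triples = [
        (s, RDF_TYPE, f"<{SOSA}Observation>"),
        (s, f"<{SOSA}madeBySensor>", f"<{EX}sensor/{sensor}>"),
        (s, f"<{SOSA}observedProperty>", f"<{EX}property/{prop}>"),
        (s, f"<{SOSA}hasSimpleResult>", f'"{result}"'),
    ]
    return "\n".join(f"{a} {b} {c} ." for a, b, c in triples)

print(observation_to_ntriples("42", "dht22-1", "Temperature", "21.5"))
```

In a real project a library such as rdflib would handle serialization, datatypes, and the UCUM unit annotations; the point here is only the triple structure.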
S1. Definition and realisation of a Turtlebot Testbed
The aim is to structure a physical environment (a grid) on which robots (TurtleBots) can move.

Each cell of the grid will be equipped with an Arduino or a Raspberry Pi fitted with a set of LEDs and a presence sensor. These boards will be connected to the network so that they can receive and send data.

Depending on the color of its LED, a cell will be interpreted as an obstacle, a free cell, a product, etc. The LEDs will be configured by an external application. The presence information sensed by a cell will be sent to the application.

Sensors and data will be described using Semantic Web ontologies and technologies.
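The cell behaviour described above can be sketched as follows. The LED-color convention and the message format are our own assumptions, not imposed by the project statement.

```python
# Toy model of one testbed grid cell: the external application sets
# the LED color, and the cell reports presence events over the network.
import json

COLOR_MEANING = {          # hypothetical color convention
    "red": "obstacle",
    "green": "free",
    "blue": "product",
}

class GridCell:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.color = "green"        # default: free cell

    def configure(self, color):
        """Called when the external application sets the LED color."""
        if color not in COLOR_MEANING:
            raise ValueError(f"unknown color: {color}")
        self.color = color

    def presence_message(self, detected):
        """Build the message the cell sends when presence changes."""
        return json.dumps({
            "cell": [self.x, self.y],
            "meaning": COLOR_MEANING[self.color],
            "presence": detected,
        })

cell = GridCell(2, 3)
cell.configure("red")
print(cell.presence_message(True))
```

On the real hardware, this logic would run on the Arduino or Raspberry Pi, with the JSON payload sent over the network to the controlling application.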
S2. Android application for configuring and controlling a Turtlebot Testbed
The aim is to develop a software application to configure the TurtleBot Testbed. This testbed is a physical environment (a grid) on which robots (TurtleBots) can move. Each cell of the grid is augmented with a set of LEDs and a presence sensor, and is connected to the network. The color of the LED configures the cell as occupied by an obstacle, a product, as free, etc.

The Android application will allow the user to remotely configure this environment and to retrieve both the presence information and the information sent by the robots moving in the testbed.
Sensors and data will be described using Semantic Web ontologies and technologies.
S3. Enriching Turtlebots with physical sensors
Definition and selection of a set of sensors to acquire environmental data from a mobile autonomous robot (TurtleBot) moving within a room. Since the TurtleBot is moving, the representation and processing of these data will have to take the geolocalized position of the robot into account. These data will be sent to an Android application monitoring the different robots moving in the room. Keep in mind that the robot is itself a source of data (planning, state, …) that can be sent as well.
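Because the robot moves, each reading must carry the pose at which it was taken. A minimal sketch of such a geolocated reading, with hypothetical field names (the UCUM code "Cel" for degrees Celsius is standard):

```python
# Tag a sensor reading with the robot's current pose before sending
# it to the monitoring application.
import json
import time

def geolocated_reading(robot_id, pose, sensor, value, unit):
    """Attach the robot's (x, y, theta) pose to a sensor reading."""
    x, y, theta = pose
    return {
        "robot": robot_id,
        "pose": {"x": x, "y": y, "theta": theta},
        "sensor": sensor,
        "value": value,
        "unit": unit,               # e.g. UCUM code "Cel"
        "timestamp": time.time(),
    }

reading = geolocated_reading("turtlebot-1", (1.2, 3.4, 0.5),
                             "temperature", 21.7, "Cel")
print(json.dumps(reading))
```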
S4. Air quality data acquisition and processing in a city
- deployment of air quality sensors in the Espace Fauriel neighbourhood
- processing and storage of the data in InfluxDB
- visualisation of a real-time 3D map of the evolution of the air quality where the sensors have been deployed, using a 3D visualisation library
- web site displaying this map

Depending on the progress:

- replace the WiFi connection with a LoRa one,
- include other data in the produced map, such as traffic, thermal power stations, …, or weather and wind forecast data.
The results of this project will be used to configure and deploy sensors of this type in a smart city.
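For the InfluxDB storage step, measurements are typically written in InfluxDB's line protocol (`measurement,tag=... field=... timestamp`). The sketch below formats one air-quality point; the tag and field names are our own assumptions, only the syntax follows the InfluxDB documentation.

```python
# Format an air-quality measurement in InfluxDB line protocol.
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build one line-protocol record; timestamp is in nanoseconds."""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

line = to_line_protocol(
    "air_quality",
    {"sensor": "ef-01", "district": "fauriel"},   # hypothetical tags
    {"pm25": 12.4, "pm10": 20.1},                 # hypothetical fields
    1574000000000000000,
)
print(line)
# air_quality,district=fauriel,sensor=ef-01 pm10=20.1,pm25=12.4 1574000000000000000
```

In practice an InfluxDB client library would build and send these records; the format is shown here because it determines which values should be tags (indexed, e.g. sensor id) versus fields (the measured values).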
S5. Mobile agents
A mobile agent is an autonomous software program capable of migrating from one machine to another in a network. In the transportation domain, mobile agents can hop from vehicles to road infrastructure (and back) to keep transport management informed about traffic-rule violations and the traffic conditions on the route traversed by the vehicle, among other applications.
The objective is to deploy mobile agents on a fleet of robots to achieve a mission. The mobile agent will execute the following basic steps:

- Migrate onto a robot
- Execute the assigned tasks
- Process the raw data on the fly and collect the processed data
- Come back with the processed data
The students will install a mobile agent platform such as Aglet on the Raspberry Pi boards present on the robots. Then they will test the mobile agent's basic steps by making agents migrate from one robot to another. Finally, the students will realize a coordinated data collection.
In this last use case, data coming from sensors on the robots will be collected by multiple robots in a given area. The mobile agent will jump from one robot to another to coordinate the data collection task among the robots. It will delete redundant data and make the robots collect the remaining data (for example, a portion of the room where no robot has collected data so far), thus aggregating the data on the fly from multiple robots. The communication efficiency of this approach will be compared to a centralised data collection approach.
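The coordinated collection step can be sketched as a toy simulation: the agent visits each robot in turn, merges that robot's readings, and drops readings for cells it has already collected. The data layout (cell coordinates mapped to values) is a hypothetical simplification.

```python
# Toy simulation of a mobile agent coordinating data collection
# across a fleet of robots, deduplicating per-cell readings.
def coordinated_collection(robots):
    """Visit each robot in turn; return deduplicated cell -> value data."""
    collected = {}
    for robot in robots:                 # the agent "migrates" robot by robot
        for cell, value in robot["readings"].items():
            if cell not in collected:    # drop redundant data
                collected[cell] = value
    return collected

fleet = [
    {"id": "r1", "readings": {(0, 0): 19.8, (0, 1): 20.1}},
    {"id": "r2", "readings": {(0, 1): 20.3, (1, 1): 21.0}},  # (0,1) redundant
]
print(coordinated_collection(fleet))
# {(0, 0): 19.8, (0, 1): 20.1, (1, 1): 21.0}
```

The centralised baseline would instead have every robot send all its readings (including the duplicated cell) to a server, which is the communication cost the project proposes to compare against.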
Groups
Topic | Group
---|---
Project 1 | Walid Ouchtiti, Abdelhamid Alaoui, Anass Elghaoui
Project 2 | Malshani Rancha Godage, Jiawei Xu
Project 4 | Ali Haidar, Ahmad Alibrahim
Project 5 | Redwan Bendraoua, Axel Roc