XOMiCar3 - OMiLAB Robotic Car Experiment 3

Keywords: ADOxx, Meta Modeling, Service-driven Enrichment

Use Case

    The aim of this project is to provide a universal solution for a self-parking mechanism with autonomous parking place discovery. The Makeblock mBot served as the prototype vehicle. The mBot, similar to self-parking assistants already adopted by manufacturers such as BMW and Audi, makes use of sensor devices to gather information about its environment. 

   The solution on the robot side provided in this project works as follows:
 

  1. The mBot should be able to discover where the parking places are located by following the road marking line. When the line stops, this is a sign for the mBot that it is located between two parking places.

    In order to achieve that, a Line-Follower Sensor is used.
     
  2. If the mBot finds itself between two parking spots, it should check if either is free by estimating the distance to the nearest object on both sides.

    An Ultrasonic Sensor is utilized for that purpose.
     
  3. If there is a free place on either side, the mBot should park itself perpendicularly in the free spot.

    The Gyro sensor is used to enable a precise 90-degree turn.
     
  4. If there is no free parking spot, the robot should continue following the marking line until it finds one.
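The four steps above can be sketched as one decision routine. This is a minimal illustration with the sensor readings abstracted as booleans, not the project's actual code:

```java
public class SelfParkingLogic {
    // One decision step of the self-parking loop. The real robot reads
    // lineVisible from the line-follower sensor and the two "free" flags
    // from the side-mounted ultrasonic sensors.
    static String nextAction(boolean lineVisible, boolean leftFree, boolean rightFree) {
        if (lineVisible) {
            return "FOLLOW_LINE";     // step 1: keep following the road marking
        }
        // The line has stopped: the mBot is between two parking places (step 2).
        if (leftFree) {
            return "PARK_LEFT";       // step 3: 90-degree turn into the left spot
        }
        if (rightFree) {
            return "PARK_RIGHT";      // step 3: 90-degree turn into the right spot
        }
        return "CONTINUE_SEARCH";     // step 4: both spots occupied, keep driving
    }
}
```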

 

    The solution on the user side is implemented as a web application (accessible in the "control" tab on the left) and as a desktop application (packaged as a JAR file that can be found in the following GitLab project, under the target folder: https://gitlab.com/nagy.andrei.mihai/Bachelor-OmiCar-JavaFX.git).

    When it comes to the desktop application sub-project, there are two different self-parking use cases:

  1. Self-parking in the first free spot found - the robot car searches for the first free spot in the parking lot and parks itself there.
  2. Self-parking in a user-specified spot - the user chooses the desired parking spot via the desktop application and the robot car parks itself in that chosen spot.

 

   The desktop application is implemented in JavaFX and looks like this (before authentication on the left and after on the right):

  • For an animated GIF file that shows how the desktop app performs when using the arrows to control the modelled car see: https://1drv.ms/i/s!Al6WBwuF5rO1uGPSQe0NBqkxFvW4.
  • For a video recording illustrating the above mentioned use cases jump to "Results".

Experiment

 The number of input ports available for mounting sensors on the robot was unfortunately restricted, so only four sensors could be installed in the end. After evaluating the different sensor combinations, the following was selected as the optimal variant:

 

  • 2 Ultrasonic Sensors on the sides - these measure the distance from the sensor to the nearest object

    Two of them were assembled - one on each side of the mBot. They are responsible for recognizing whether a parking place is free or not.
    If the nearest object is at a distance greater than the standard length of a parking spot (20 cm), the spot is free and the mBot can approach and park itself there.
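    This check reduces to a threshold comparison. A minimal sketch, assuming the ultrasonic reading is already available as a distance in centimetres:

```java
public class ParkingSpotCheck {
    // Standard length of a parking spot in this setup (20 cm).
    static final double SPOT_LENGTH_CM = 20.0;

    // A spot counts as free when the nearest object measured by the
    // ultrasonic sensor on that side is farther away than the spot length.
    static boolean isSpotFree(double distanceCm) {
        return distanceCm > SPOT_LENGTH_CM;
    }
}
```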



     

  • Gyro Sensor - measures the attitude of a moving object, in particular the rotation angles about the X, Y and Z axes in three-dimensional space

    The gyro sensor has been used for enabling precise 90-degree turns when parking perpendicularly after finding a free parking spot.

    How it works: the initial value of the Z-axis is saved and compared continuously until it has increased or decreased by 90 degrees, depending on whether it is a right or a left turn. When the condition is met, the mBot stops.

    How the code looks:




    beginPos - the initial Z-axis value when starting the turn
    endPos - the ideal point at which the mBot should stop for a precise 90-degree turn
    pos - the current Z-axis value, compared against the desired end value

    *The function is generalized to avoid hard-coding, though it is only used for making 90-degree turns, so degrees = 90.
    **The condition actually accepts a 4-degree error, because due to latency it is almost impossible to "catch" the moment when the sensor returns exactly +/- 90 degrees from the start value.
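    Based on the beginPos / endPos / pos description above, the turn condition can be sketched roughly as follows. This is a minimal illustration, not the project's actual code; the gyro reading itself is abstracted away and the names are only kept consistent with the description:

```java
public class GyroTurnLogic {
    // Accepted error around the target angle (see note ** above).
    static final double TOLERANCE_DEG = 4.0;

    // Target Z-axis value: beginPos shifted by the turn angle.
    // direction is +1 for a right turn, -1 for a left turn.
    static double endPos(double beginPos, double degrees, int direction) {
        return beginPos + direction * degrees;
    }

    // True once the current Z-axis reading pos is within the tolerance
    // of endPos; at that point the mBot stops its motors.
    static boolean turnComplete(double pos, double endPos) {
        return Math.abs(pos - endPos) <= TOLERANCE_DEG;
    }
}
```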

     

  • Line-follower sensor - consists of two sub-sensors; its purpose is to recognize whether each of them is located over a black line on a white background (or vice versa) and, with that knowledge, to reliably follow the line.





    The Line-follower sensor is utilized in this thesis to keep the mBot in the middle of the parking lot by following the black road marking line. Additionally, if the line stops, this is a sign for the robot that it has reached the next pair of parking places, so it has to "look around" for a vacant spot.

    How the code looks: 
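    A minimal sketch of how such two-sensor line-following logic could look. This is an illustration, not the project's actual code:

```java
public class LineFollowerLogic {
    enum Action { FORWARD, TURN_LEFT, TURN_RIGHT, LINE_ENDED }

    // leftOnLine / rightOnLine: whether each of the two sub-sensors
    // currently sees the black road marking line.
    static Action decide(boolean leftOnLine, boolean rightOnLine) {
        if (leftOnLine && rightOnLine) {
            return Action.FORWARD;    // centred on the line, keep driving
        }
        if (leftOnLine) {
            return Action.TURN_LEFT;  // drifted right, steer back left
        }
        if (rightOnLine) {
            return Action.TURN_RIGHT; // drifted left, steer back right
        }
        return Action.LINE_ENDED;     // line stopped: look for a free spot
    }
}
```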




The whole information and control flow between the Client-Side Logic (the GUI) and the Server-Side Logic (responsible for the whole interaction with the robot) is enabled by HTTP requests/responses to the published endpoints.
All endpoints are secured (marked with the annotation @Secured), meaning that requests from unauthorized users (not registered or without a valid token for the current time slot) are denied.
Below follows the REST interface, available for further reuse:



* drive*/{speed} - the desired speed at which the robot should drive in the corresponding direction is passed as the path parameter {speed}.
** Example: an AJAX GET request to the endpoint driveForward with speed = 100. The token is set in the header before sending the request and validated afterwards.

 

In the JavaFX desktop application the REST endpoints depicted in the image above are called via HTTP GET requests. The class that creates and runs the requests in new threads looks as follows:
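A minimal sketch of such a threaded request class. The class name, the header name used for the token, and the use of java.net.http.HttpClient are assumptions, not the project's actual implementation (which is in the GitLab repository linked above):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestCaller {
    private final HttpClient client = HttpClient.newHttpClient();
    private final String token;

    public RestCaller(String token) {
        this.token = token;
    }

    // Builds a GET request for one of the published endpoints, with the
    // token set in a request header (the header name is an assumption).
    static HttpRequest buildRequest(String url, String token) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", token)
                .GET()
                .build();
    }

    // Runs the request on a new thread so the JavaFX UI thread never blocks.
    public void callAsync(String url) {
        new Thread(() -> {
            try {
                client.send(buildRequest(url, token),
                        HttpResponse.BodyHandlers.ofString());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();
    }
}
```

For example, driving forward at speed 100 would be `callAsync("http://<host>/driveForward/100")`, matching the drive*/{speed} pattern described above.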

Results

Finally, to illustrate the results of this Bachelor thesis, a video documentation is provided.
The objective was to be able to follow the location of the mBot on the map in the user interface and to compare it with the real-life position and movements of the robot, so the screen of the notebook on which the experiment was executed is also visible in the scene.

Agenda
1. The mBot is manually driven to the entry point of the parking lot. (by the user)
2. The position is adjusted so that both line-follower sensors are on the line. (by the user)
3. The self-parking mechanism is turned on; from that point on, it is entirely up to the mBot to find a free parking spot and park itself perpendicularly. (by the robot)

 

 

Video results for the self-parking process using the JavaFX desktop application:

  • First, the authentication part of the project. Without it, the desktop application cannot be used.
  • Next, the two use cases, each with a successful and failed parking attempt.
    • Use case 1: self-park in the first free spot found. The mBot searches for and parks in the first free parking place. If the parking lot is full, it cannot park. The video in the corner shows the desktop application. The user presses "P" on the keyboard to start the self-parking process. The desktop application model moves with the mBot to reflect reality.
    • Use case 2: self-park in a user-chosen spot. The mBot parks in a user-specified spot. The user chooses this spot via the desktop app, which is why in this case the video in the corner starts first. The user moves the car to the desired parking spot and presses "Park here". The mBot then travels to that location and attempts to park. If the spot is free it parks; if not, it informs the user.