HKUST COMP 4461 Lab materials for how to program Pepper (and Nao) with Choregraphe and Python SDK
Venue: Room 4214, Teaching Lab 2, Academic Building.
Date: Nov. 7, 2018
In this lab, we will introduce how to program the Pepper and Nao robots with Choregraphe and the Python SDK. This repository contains demos created with Choregraphe or the NAOqi Python (2.7) SDK to help you get started.
- Overview
- Preparation for Programming
- Choregraphe Example
- Python Example
- More resources
- Useful tips
- Other issues
- Contribute
- Meta
Pepper is a humanoid robot by Aldebaran Robotics and SoftBank designed with the ability to read emotions. Check the following links for more details about the Pepper robot:
Nao (pronounced now) is an autonomous, programmable humanoid robot developed by Aldebaran Robotics, a French robotics company headquartered in Paris, which was acquired by SoftBank Group in 2015 and rebranded as SoftBank Robotics. Check the following links for more details about the Nao robot:
The Pepper (or Nao) robot can detect sounds, people, and obstacles based on its perception algorithms. The OS is a modified Gentoo distribution called OpenNAO, with NAOqi running on top of it. This robotics OS runs all the software, from hardware control to AI decision making.
SoftBank provides several ways to program the Pepper and Nao robots; check them here.
Choregraphe is a multi-platform development application with a graphical user interface. It allows you to create animations and behaviors on Pepper, test them on a simulated robot or directly on a real one, monitor and control Pepper and Nao, and more.
For Choregraphe, refer to the official installation guide.
For Python SDK, refer to the official installation guide.
You need to sign up for an account to download the software. During the installation, you may need to provide the following license key:
654e-4564-153c-6518-2f44-7562-206e-4c60-5f47-5f45
Before starting the application, connect your computer and the robot to the same network, 'PepperRobot' (the password will be announced during the class). The initial interface of the application then looks like below:
For each part of the interface, i.e., the toolbar and the other panels, please refer to the detailed explanation on the official website. There is also a chart that describes each button in detail.
After you get the Demo folder, please open the .pml file with Choregraphe, and you will see a project previously created with the same application. This project was created for the Engineering School's demonstration on Information Day 2017. We will explain this demo in detail:
We design the workflow as below:
For each component, the detailed design is as below:
No. | Speech | Screen | Gesture | Sound | Remarks
---|---|---|---|---|---
1 | Hi! Welcome to the Engineering Commons. | Embrace_change | | |
2.1 | Do you want to see me dance? | Embrace_change | | |
2.2 | | Embrace_change | Lift forearm to 90 degrees | | If yes, go to 2.3; otherwise go to 3.1
2.3 | | Embrace_change | Disco dance | Saxophone music |
3.1 | Do you want to know more about today's Engineering admission events? | Embrace_change | | |
3.2 | | Embrace_change | Lift forearm to 90 degrees | | If yes, go to 3.3; otherwise go to 4.1
3.3 | Scan me to download the app! Engineering related talks will be held in Lecture Theatre J... K... and H. Besides, consultation sessions will be provided for both DSE and non-DSE applicants. | Info Day app QR code | Related gestures | |
4.1 | Do you want to see something cool? | Info Day app QR code | | |
4.2 | | Info Day app QR code | Lift forearm to 90 degrees | | If yes, go to 4.3; otherwise go to 5.1
4.3 | On campus today, you can watch robots battle against each other. You can even sign up to play! | Robomaster_image | | |
4.4 | | Robomaster_image | Gorilla action | Machine gun sound |
4.5 | You can see an electric vehicle built by our students. | EVcar_image | | |
4.6 | | EVcar_image | Driving action | Car sound |
4.7 | You can visit the labs to see experiments. | Lab_image | | |
4.8 | You can take pictures with the 3D trick art and explore the world of tomorrow through Augmented Reality. | HKUST.SENG_AR_LiPHY_app | | |
4.9 | | HKUST.SENG_AR_LiPHY_app | Camera action | Shutter sound |
4.10 | You can scan special lights for information. | HKUST.SENG_AR_LiPHY_app | | |
4.11 | | HKUST.SENG_AR_LiPHY_app | Eyes light up | |
4.12 | For the 3D trick art, scan the HKUST.SENG AR app QR code shown on the left. For special lights, scan the Lie Fly app QR code on the right. | HKUST.SENG_AR_LiPHY_app | Speaking gestures | |
5.1 | If you need help finding information today, feel free to reach out to our Engineering Student Ambassadors. | ESA_image | | |
5.2 | Enjoy your day at the most beautiful campus in Hong Kong. See you around! | HKUST_image | Related gestures | |
Based on this scheme, we can add or change the related gestures, screen images, or speech text accordingly.
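For reference, a single step such as row 1 (a line of speech, a tablet image, and a gesture) could also be written as a Choregraphe Python box instead of being wired purely from graphical boxes. The sketch below is only an illustration under our own assumptions: the image URL and the animation name are placeholders rather than the demo's real assets, and the tablet part applies to Pepper only.

```python
# Sketch of a Choregraphe Python box script (illustration only; the image URL
# and animation name are placeholders, and ALTabletService exists only on Pepper).
class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self)

    def onInput_onStart(self):
        tts = ALProxy("ALTextToSpeech")
        tablet = ALProxy("ALTabletService")      # Pepper's tablet
        animation = ALProxy("ALAnimationPlayer")

        # Show an image from the behavior's html/ folder on the tablet
        # (adjust the URL/path to where your image actually lives).
        tablet.showImage("http://198.18.0.1/apps/.lastUploadedChoregrapheBehavior/html/Embrace_change.png")

        tts.say("Hi! Welcome to the Engineering Commons.")

        # Play one of the standard animations as the accompanying gesture
        animation.run("animations/Stand/Gestures/Hey_1")

        self.onStopped()  # trigger the box's onStopped output
```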
After we finish editing the program, we can upload it to the Pepper robot while the robot is in rest mode. Then we can click the play button to test the program on Pepper.
In the Demo folder, the additional sound and image files are stored in the html subfolder, while the main program is in the show.pml file. You can use it as a starting point for your own program.
First, you need to set up a Python 2.7 environment on your computer. Then, install the NAOqi Python SDK following the official guide. After setting up the environment, you can type in the following code to test your setup:
from naoqi import ALProxy

# Replace <IP of your robot> with the robot's IP address; 9559 is the default NAOqi port
tts = ALProxy("ALTextToSpeech", "<IP of your robot>", 9559)
tts.say("Hello, world!")
If Pepper says "Hello, world!", then you have set up the development environment successfully. We will announce the IP address during the class. If you forget the IP while experimenting, press Pepper's chest button and it will say it.
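Other NAOqi services can be tested the same way through ALProxy. For example, the following minimal sketch (our own test snippet, not part of the demo) wakes the robot up, turns its head, and puts it back to rest; make sure the robot has clear space around it and is not charging, since motion is disabled while charging.

```python
from naoqi import ALProxy

ROBOT_IP = "<IP of your robot>"  # replace with the IP announced in class
PORT = 9559                      # default NAOqi port

motion = ALProxy("ALMotion", ROBOT_IP, PORT)
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)

motion.wakeUp()                        # turn on the motors
posture.goToPosture("StandInit", 0.5)  # move to a safe standing posture at half speed
# Turn the head left (0.5 rad) and back to center over 1 s each (absolute angles)
motion.angleInterpolation("HeadYaw", [0.5, 0.0], [1.0, 2.0], True)
motion.rest()                          # put the robot back to rest when done
```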
Demo code is provided in this repository to show you how to call NAOqi APIs in Python. This demo detects human faces and tells their genders and expressions based on what is detected. After changing the IP address of the robot, you can run this code by simply typing
python expression_teller.py
This example showcases how to track people; get their positions, genders, and emotions; and use speech recognition and text-to-speech. There are also plenty of demos on the official documentation website.
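To get a feel for how such a pipeline is wired together, here is a rough sketch that polls ALMemory for the people tracked by ALPeoplePerception and asks ALFaceCharacteristics to analyze them. The memory keys and the format of the returned values reflect our reading of the NAOqi 2.5 documentation, so double-check them against the documentation and against expression_teller.py before relying on them.

```python
import time
from naoqi import ALProxy

ROBOT_IP = "<IP of your robot>"  # replace with the IP announced in class
PORT = 9559

memory = ALProxy("ALMemory", ROBOT_IP, PORT)
faces = ALProxy("ALFaceCharacteristics", ROBOT_IP, PORT)
awareness = ALProxy("ALBasicAwareness", ROBOT_IP, PORT)

awareness.startAwareness()  # make the robot look for and track people

for _ in range(10):
    # IDs of the people currently tracked by ALPeoplePerception
    people = memory.getData("PeoplePerception/PeopleList")
    for person_id in people:
        faces.analyzeFaceCharacteristics(person_id)  # refresh the face analysis
        # Raw gender / expression estimates; check the docs for the exact value format
        gender = memory.getData("PeoplePerception/Person/%d/GenderProperties" % person_id)
        expression = memory.getData("PeoplePerception/Person/%d/ExpressionProperties" % person_id)
        print person_id, gender, expression
    time.sleep(1)

awareness.stopAwareness()
```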
Thanks to the previous COMP 4461 TA Zhida SUN for the tutorials: lab3 and lab4. You can refer to these two tutorials as they have more details about installing the software for programming.
You can also search for "Pepper robot" or "Nao robot" on YouTube to see what other people have programmed the robots to do.
- Turn the robots on or off: long press the chest button.
- Put the robot into rest mode or wake it up: double press the chest button.
- Get the robot's IP address: short press the chest button.
- Sometimes it takes a long time to turn on the robots; wait patiently or ask the TA for help.
- While they are charging, the motion functions (e.g., moving forward) are disabled by default.
- Red or yellow LED lights indicate errors or warnings; try to fix them gently or ask the TA for help.
- Your computer should be connected to the same WiFi network as the robots.
- There are some differences between Pepper and Nao. The official documentation shows an icon of the robot when a section is specific to one of them, so check that first.
- The OS version of our Pepper robot is 2.5.5, while that of our Nao robot is 2.1.4. Please refer to the relevant documents (links above) and download the corresponding software for programming these two robots, though some 2.5.5 examples can also work on Nao.
- The example here is for Pepper; Nao should be similar. The Nao Example folders contain some examples for Nao, but be careful not to make it move, as it falls down very easily!
- Take care of the robots! Do not hit them!
- Do not hold the Pepper or Nao robot's arms/hands in the same gesture for a long time; it will break the robots!
- On Windows, if you want to use the Python SDK, make sure you install the 32-bit Python 2.7, as it needs to match the SDK (32-bit). Using conda may help you set up the environment.
- The capabilities of the built-in speech recognition and vision processing packages are limited. Feel free to use your own laptop to record the voice (and recognize the speech using other APIs like the Google Speech API) and send commands to control the robot in real time, as shown in the sketch after this list.
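As a minimal sketch of that laptop-in-the-loop pattern, assume you have written your own transcribe() helper (a hypothetical function, not provided here) that records audio on the laptop and returns the recognized text, e.g., via the Google Speech API; the robot side then only needs a few ALProxy calls:

```python
from naoqi import ALProxy

ROBOT_IP = "<IP of your robot>"  # replace with the IP announced in class
PORT = 9559

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
motion = ALProxy("ALMotion", ROBOT_IP, PORT)

def transcribe():
    """Placeholder: record audio on your laptop and return the recognized text
    using whatever speech API you prefer (e.g., the Google Speech API)."""
    raise NotImplementedError

while True:
    command = transcribe().lower()
    if "hello" in command:
        tts.say("Hello! Nice to meet you.")
    elif "forward" in command:
        motion.wakeUp()
        motion.moveTo(0.2, 0.0, 0.0)  # move 0.2 m forward; keep clear space around the robot
    elif "stop" in command or "rest" in command:
        motion.rest()
        break
```

This keeps the heavy speech processing on your laptop and sends only simple commands to the robot over the shared WiFi network.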
We would love you to contribute to Project 3; check the LICENSE file for more information.
Zhenhui PENG. Distributed under the MIT license. See LICENSE for more information.