Tutorial
Table Of Contents
- 1. Premise
- 2. Raspberry Pi System Installation and Development
- 3. Log In to The Raspberry Pi and Install The App
- 4. Assembly and Precautions
- 5. Controlling Robot via WEB App
- 6. Common Problems and Solutions (Q&A)
- 7. Set The Program to Start Automatically
- 8. Remote Operation of Raspberry Pi Via MobaXterm
- 9. How to Control WS2812 RGB LED
- 10. How to Control The Servo
- 11. How to Control DC Motor
- 12. Ultrasonic Module
- 13. Line Tracking
- 14. Make A Police Light or Breathing Light
- 15. Real-Time Video Transmission
- 16. Automatic Obstacle Avoidance
- 17. Why OpenCV Uses Multi-threading to Process Video
- 18. Learn to Use OpenCV
- 19. Using OpenCV to Realize Color Recognition and Tracking
- 20. Machine Line Tracking Based on OpenCV
- 21. Create A WiFi Hotspot on The Raspberry Pi
- 22. Install GUI Dependent Item under Windows
- 23. How to Use GUI
- 24. Control The WS2812 LED via GUI
- 25. Real-time Video Transmission Based on OpenCV
- 26. Use OpenCV to Process Video Frames on The PC
- 27. Enable UART
- 28. Control Your AWR with An Android Device
- Conclusion
'''
IP = '192.168.3.11'
'''
Then initialize the camera. You can change these parameters according to your needs
'''
camera = picamera.PiCamera()
camera.resolution = (640, 480)
camera.framerate = 20
rawCapture = PiRGBArray(camera, size=(640, 480))
'''
Here we instantiate the zmq socket used to send the frames, using the TCP transport; 5555 is the port number.
The port number can be customized, as long as the sending end and the receiving end use the same port
'''
context = zmq.Context()
footage_socket = context.socket(zmq.PAIR)
footage_socket.connect('tcp://%s:5555' % IP)
print(IP)
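The receiving end needs a matching PAIR socket bound on the same port. As a quick local illustration (a sketch assuming pyzmq is installed; port 5556 is chosen arbitrarily here), a bound and a connected PAIR socket in one process can exchange a message:

```python
import zmq

context = zmq.Context()

# Receiving end: a PAIR socket bound on a port (5556 here, chosen arbitrarily)
receiver = context.socket(zmq.PAIR)
receiver.bind('tcp://*:5556')

# Sending end: connects to the same port, just as footage_socket does above
sender = context.socket(zmq.PAIR)
sender.connect('tcp://127.0.0.1:5556')

sender.send(b'frame data')
msg = receiver.recv()
print(msg)  # b'frame data'
```

If the two ends use different port numbers, the connect side simply never reaches the bound socket and recv() blocks forever, which is why the tutorial stresses keeping them identical.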
'''
Next, loop to collect frames from the camera. Since we are using the Raspberry Pi camera, use_video_port is set to True
'''
for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    '''
    Since the imencode() function needs a numpy array (or scalar) to encode the image,
    we first convert the captured frame to a numpy array
    '''
    frame_image = frame.array
    '''
    Encode the frame as JPEG stream data and keep it in a memory buffer
    '''
    encoded, buffer = cv2.imencode('.jpg', frame_image)
    jpg_as_text = base64.b64encode(buffer)
    '''
    Send the base64-encoded stream data in the buffer to the video receiving end
    '''
    footage_socket.send(jpg_as_text)
    '''
    Clear the stream buffer before the next capture; capture_continuous() requires
    this when reusing the same PiRGBArray between iterations
    '''
    rawCapture.truncate(0)
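The base64 step only matters for transport: it turns the binary JPEG buffer into ASCII-safe text, and the receiving end decodes it back to the identical bytes before handing them to the image decoder. A stdlib-only sketch of that roundtrip (using a placeholder byte string in place of a real JPEG buffer):

```python
import base64

# Stand-in for the JPEG bytes that cv2.imencode produces on the sender
jpeg_bytes = b'\xff\xd8\xff\xe0 fake jpeg payload \xff\xd9'

# Sender side: encode to ASCII-safe text before footage_socket.send()
jpg_as_text = base64.b64encode(jpeg_bytes)

# Receiver side: decode back to the original binary buffer,
# which an image decoder such as cv2.imdecode would then turn into a frame
restored = base64.b64decode(jpg_as_text)

print(restored == jpeg_bytes)  # True
```

The encoded text is about a third larger than the raw buffer, which is the price paid for a payload that survives any text-based transport unchanged.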