Project Documentation
This project is maintained by cliffeby
I am a retired civil engineer studying software development as an avocation. I’ve found the software development “community” unparalleled in its support for all levels of users. When you hit a roadblock, video tutorials, blogs, Stack Overflow, etc. provide a wealth of content. For me, a quick search, a ctrl-c/ctrl-v, and my problem is solved. No other profession offers more to its colleagues. However, getting past the HW or sample app has often been a struggle for me. As I navigate that world with my Duckpins and Roch app, I will document some of the “not so sample” issues and questions that arise.
This GitHub page is one of several blogs on my efforts to get beyond HW. I expect that this document and the other blogs will be updated as I learn more about Azure, VMs, Python, and OpenCV. I also post at DEV.
Cliff Eby - October 2018
UPDATE December 2018 - A second blog is available at Duckpin Phase II. It describes the use of Azure Machine Learning Studio and Jupyter Notebooks for analytics and other improvements to the project.
UPDATE January 2020 - A third blog is available at Duckpin Phase III. It is strictly data analysis, covering over 10,000 rolls.
As a hobbyist with a life-long interest in what makes things work, I look for projects that are more than demonstrations of “cool” technology. For years, I wanted an Arduino or Raspberry Pi (RPI) but avoided the “technical investment” because I wanted to do more than turn on LEDs. Similarly, with IoT I wanted to stream and store more than local weather data.
The pinsetters at Congressional Country Club were designed well before the first integrated circuit was demonstrated. They are controlled by mechanical relays in a Rube Goldberg artform. When I bring guests to the Club, the pinsetters are a must stop, and I can spend at least 15 minutes watching them perform. (See Appendix D) Each lane headboard shown below has a display of pin numbers, but they are not and were never functional. Could I use a computer to light the numbers? Could I track the ball’s location, angle, and speed and measure the result? Did I find a use for an RPI and IoT in one project? And could I do it all for less than $100 per lane?
Regular duckpin bowling is popular in the northeastern and mid-Atlantic United States. It is a variation of 10-pin bowling. The balls used in duckpin bowling are 4 3⁄4” to 5” in diameter (slightly larger than a softball), weigh 3 lb 6 oz to 3 lb 12 oz each, and lack finger holes. They are thus significantly smaller than those used in ten-pin bowling but slightly larger and heavier than those used in candlepin bowling. The pins, while arranged in a triangular fashion identical to that used in ten-pin bowling, are shorter, smaller, and lighter than their ten-pin equivalents, which makes it more difficult to achieve a strike. For this reason (and as in candlepin bowling), the bowler is allowed three rolls per frame (as opposed to two rolls per frame in ten-pin bowling).
The Sherman automatic pinsetter was developed in 1953 and the company ceased operation in 1973. Existing operators are forced to cannibalize pinsetter parts from the bowling houses that close, often buying the machines and putting them into storage to use for spare parts. The lack of new pinsetters is a significant cause of the decline of duckpin bowling.
There are four clubs in the Washington-Metro area that have duckpin facilities on the premises – Congressional, Chevy Chase, Kenwood, and Columbia. Congressional’s pinsetters were installed in 1961 and have been maintained by Ken Palmer, its bowling professional, for the past 30 years. His experience and a good inventory of spare parts are the key to their continued reliable operation. CCC does not have an auto-scorekeeper. Prior to 1961, the pins were manually reset by golf caddies. At CCC, duckpin bowling is a winter sport.
In addition to lighting the Lucite numbers, there was a request to indicate the number of balls used during each frame. If the ball can be reliably detected, a seven-segment LED display can be controlled by the RPI to indicate state.
User interest in, and requirements for, the ball-pin interaction data are not known. There is no known Moneyball analysis of duckpins. It is hoped that a university may have interest in the one-of-a-kind dataset. If this data can be captured, JSON or CSV table format stored in the Cloud is likely a good starting point.
Spoiler alert: the RPI cannot reliably detect a ball across multiple frames and often misses gutter balls. It can, however, capture and send a video file with multiple ball frames for post-processing. For the ball counter, a laser tripwire was investigated but proved unreliable.
Setting up my image on the RPI takes about four hours. OpenCV, IoTHub, and VSCode are large installs and sometimes need a second try. It’s generally best to minimize memory usage (close other windows and multitask on another computer). Once completed, back it up – another lengthy process – but well worth it. I cracked my SD card (make sure that you take the card out of its slot before installing the RPI in, or removing it from, a case), and a backup would have saved a lot of time.
I try to keep my image up to date using the command `sudo apt-get update && sudo apt-get upgrade -y`.
Appendix A contains hints on the image setup and issues that I encountered.
Several online video tutorials show an RPI with a standard 1080p camera module achieving multiple (maybe over 40) frames per second of video-processing throughput. Relays connected to the GPIO pins should be able to switch/light 10 LED bulbs using an external 12 VDC power supply. Azure and AWS have RPI SDKs. It seems like all the pieces are there, but can it all come together to be more than a “classroom” or “demo” project?
RPIs run Raspbian, a Debian-based Linux operating system. As a DOS/Windows guy, I expected some challenges, but nothing a search couldn’t solve. Most RPI video processing is done with Open Computer Vision (OpenCV), which has Python and C++ SDKs. With no background in either language, I first investigated C++ because it was reported to be faster. After a couple of tries, I moved to Python.
On an RPI, typically two major Python versions are installed. I stuck with version 3; at the time of this writing it was Python 3.4.2 (`python3 --version`). I found Python 2 examples often needed some syntax changes to pass the Python 3 interpreter.
The RPI is an amazing piece of hardware for $35, but I prefer to use my desktop for coding and research. When I loaded Python onto my desktop, I installed Python 3.6.2. It installed to Programs\Python\Python36-32 and was added to the path so that `python` at the prompt starts the Python 3 interpreter. I did not have issues with portability between the two Python 3 versions.
Python syntax was a little new (see the Wikipedia page and this online book for a good language summary). My first programs were written to understand the control/looping syntax and the data structures: dictionaries, lists, generators, and tuples. The default IDE on the RPI is IDLE, but it is short on features. I tried installations of WebStorm and Visual Studio Code and settled on VSCode despite the previously referenced error on an RPI. Also, VSCode consumes considerably more resources than IDLE, so IDLE was often used when only minor changes were expected.
OpenCV was next, and early efforts were to grab a frame from a video stream, analyze it, and save it to a file. Some tutorials offered a video file for experimentation, so I started with my desktop development instance and moved to the RPI piCamera after some experimentation. Recognizing that I would not want to do most of the development sitting next to the pinsetter, I used the camera to capture representative video for subsequent development. But this created two code bases: one with video from a file and a second with video streamed from the camera.
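One way to soften the two-code-base problem is to hide the frame source behind a small wrapper so the analysis code is identical. This is only a sketch of the idea, not the repo's actual approach; on the RPI the project used the piCamera library rather than `cv2.VideoCapture`, and the file name is a placeholder.

```python
import cv2

def open_source(path=None):
    """Return an OpenCV capture: a video file when a path is given,
    otherwise camera device 0 (illustrative; the RPI used piCamera)."""
    return cv2.VideoCapture(path if path else 0)

cap = open_source('lane4_sample.h264')   # on the desktop; open_source() live
while True:
    ok, frame = cap.read()
    if not ok:                           # end of file or camera error
        break
    # ...analyze the frame here, same code for both sources...
cap.release()
```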
Equally difficult for me was version control and keeping everything synched. Discipline with git and GitHub was always lacking but I tried to make it work. With three local repos (desktop, RPI at the lane, and RPI at home) and the GitHub remote, I used Appendix B to keep synched.
Along the way, I added the ability to use Remote Desktop and SSH from my desktop to control the RPI or access the RPI’s SD card storage. The only drawback is that neither remote process provides direct camera images on the remote desktop. As shown here, OpenCV, using Remote Desktop and the `waitKey` command to break the loop, will generate an image on the remote:
import cv2

# Load a test image from disk (desktop path shown).
img = cv2.imread('C:/Python/03323_HD.jpg')
cv2.imshow('Window', img)   # opens a window in the Remote Desktop session
cv2.waitKey(0)              # blocks until a key press; lets imshow render
cv2.destroyAllWindows()
Since my final installation was expected to be an RPI without a monitor or input devices, I needed to learn remoting resources and limitations. `sys.argv` (from `import sys`) was one of those concepts. It allows you to pass parameters from a command-line execution; there are lots of online tutorials on this.
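For instance, a minimal sketch (the lane-number argument anticipates the per-lane configuration discussed later; the default value is illustrative):

```python
import sys

# Usage:  python3 DPBoot.py 4    -> run with the configuration for lane 4
lane = int(sys.argv[1]) if len(sys.argv) > 1 else 4   # default is illustrative
print('Starting duckpin capture for lane', lane)
```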
Next was the use of the GPIO pins on the RPI. I started with the obligatory single blinking LED on a breadboard powered by the RPI and quickly moved on to connecting the GPIO pins to SainSmart 8- and 4-channel relay modules. As with learning Python syntax and OpenCV, I created some simple programs to get feedback.
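One of those simple feedback programs might look like the sketch below; the BCM pin number and the active-low behavior are assumptions that depend on the wiring and the relay board.

```python
import time
import RPi.GPIO as GPIO

RELAY_1 = 17                         # BCM numbering; pin choice is illustrative
GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_1, GPIO.OUT)

# Many SainSmart relay boards are active-low: LOW energizes the relay.
GPIO.output(RELAY_1, GPIO.LOW)       # bulb on
time.sleep(2)
GPIO.output(RELAY_1, GPIO.HIGH)      # bulb off
GPIO.cleanup()
```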
Finally, I turned to IoT to send and store data. I started with AWS and struggled to load the Python instance on the RPI. I can’t recall the installation issue, but once installed I simply could not set the required credentials. Naming conventions in the tutorials seemed inconsistent, even with my background in writing several AWS lambda functions. After many hours, I turned to Azure IoT for Python and it just worked. I had a sample IoT client sending data to Azure and storing it in Blob Storage in less than an hour.
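For reference, a minimal sketch of sending one telemetry message with the current azure-iot-device package (the 2018 sample app used an earlier SDK); the connection string is a placeholder, and the payload fields simply echo the table output shown later.

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder: copy the real string from the device's IoT Hub entry.
CONNECTION_STRING = 'HostName=...;DeviceId=...;SharedAccessKey=...'

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
payload = {'lane': 4, 'beginingPinCount': 1023, 'endingPinCount': 512}
client.send_message(Message(json.dumps(payload)))   # lands in IoT Hub routing
client.shutdown()
```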
So far, so good. On to the design process.
I made a 4 x 6 x 12” wooden channel to mount the RPI, camera, and relay modules (Figure 2), working within several physical restrictions.
Reflectors for the bulbs that light the Lucite numbers were long since destroyed and unavailable. Using 4” and 2 3/8” hole saws, I created ¼” plywood flanges and cut the tops off 12 oz soda cans for the reflectors. A washer was epoxied to the back of each can to accept the bulb holder.
Two LED bulb forms were considered. Five-watt-equivalent LED T10 wedge bulbs and connectors were inexpensive and produced a subtle light. The 15-watt-equivalent LED 1156-base bulbs, also inexpensive, were overpowering and washed out the Lucite numbers. Both bulbs are offered in various colors if desired. Beware: LED bulbs have polarity, and the T10 wedge is reversible. No harm if installed backwards; it just doesn’t light.
Last, a 12-pin connector was considered for ease in removing the RPI and relays from the 10 led bulbs. Making the 20+ crimps seemed tedious and use of the connector remains on the TODO list.
Duckpin bowling is unique in that the player is allowed three balls in each frame to knock down the 10 pins. Unlike 10-pin bowling, where the pinsetter cycles automatically after each thrown ball, duckpins requires the bowler to clear any deadwood that may remain on the alley. Clearing deadwood is optional, as it is often not needed. The reset of all 10 pins is also manually initiated in duckpins.
The headboard that displays the Lucite pin numbers is about six feet from pin #1 and the camera is mounted on its back facing the pins. It is a perfect location to view the deadwood and reset arm, the pinsetter motion, the ball before hitting the pins, and the pins that remain standing.
In general, the software needs to recognize several states, capture results, light the LED bulbs, and send IoT messages:

State | Action
---|---
Ball has entered the field of view | Save video, capture location, and repeat until absent
Deadwood or reset active | Stop pin and ball capture
None of the above | Check for pins standing; light LED bulbs
Pins stopped falling | Record pin configuration
Data package ready | Send IoT data to the Cloud
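A sketch of how that table can map onto code; the state names and stub bodies are illustrative, not the repo's actual structure.

```python
from enum import Enum, auto

class State(Enum):
    BALL_IN_VIEW = auto()   # save video, track the ball until it is absent
    ARM_ACTIVE = auto()     # deadwood or reset: suspend pin and ball capture
    IDLE = auto()           # check standing pins, light bulbs, queue IoT data

def classify(frame):
    """Stub: the real code inspects cropped regions of the frame for
    ball motion and for pinsetter-arm motion."""
    return State.IDLE

def step(frame):
    state = classify(frame)
    if state is State.BALL_IN_VIEW:
        pass   # append frame to the video buffer, log the ball centroid
    elif state is State.ARM_ACTIVE:
        pass   # ignore pin changes while the arm sweeps
    else:
        pass   # read pin configuration; on change, update bulbs and send data
```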
OpenCV typically uses a mask approach to detect motion or changes between two frames of video. The first frame is subtracted from the second, and the differences are highlighted. Frame by frame, this approach works well for detecting a ball moving toward the pins and for detecting both deadwood and reset pinsetter activity.
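A minimal frame-differencing sketch along those lines; the file name and tuning values are placeholders.

```python
import cv2

def prep(frame):
    """Grayscale + blur so sensor noise does not register as motion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (21, 21), 0)

cap = cv2.VideoCapture('lane4_sample.h264')
ok, frame = cap.read()
prev = prep(frame)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cur = prep(frame)
    diff = cv2.absdiff(prev, cur)                      # pixel-wise difference
    mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(mask) > 500:                   # tuning value is a guess
        print('motion detected')
    prev = cur
cap.release()
```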
To detect the presence of a specific pin, individual pin pattern matching was attempted but found to offer poor results. Due to varying distances from the camera, the pins appeared at different sizes; back pins were obscured; and reflections often created false positives. Matching on the pin tops was tried to eliminate the size, reflection, and obscurity issues, but it was inconsistent.
Best matches were obtained when a red filter was applied to the pin tops: if the red band was detected within the cropped image of a pin top, the pin was standing. Efficiency of both motion detection and pin-presence checks improves if the image is limited in size, so pin frames and arm frames are “cropped” to improve speed and edge conditions.
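A sketch of the red-band test on one cropped pin top; the crop coordinates, HSV thresholds, and pixel-count cutoff are all assumptions to be tuned per installation.

```python
import cv2
import numpy as np

# Illustrative crop (rows, cols) for one pin top; each pin has its own crop.
PIN1_CROP = (slice(100, 140), slice(600, 660))

def pin_standing(frame):
    top = frame[PIN1_CROP]
    hsv = cv2.cvtColor(top, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in HSV, so test both ends of the hue range.
    lo = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    hi = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
    return cv2.countNonZero(lo | hi) > 50              # cutoff is a guess
```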
Since pins are either up or down, the 10-pin configuration is a value between 0 and 1023 (2^10 = 1024 combinations). Pin 1 (index [0]) has an up value of 512, Pin 2 (index [1]) an up value of 256, … and Pin 10 (index [9]) an up value of 1. The pin configuration number is simply the sum of the ten values, or equivalently a binary string ranging from 0b1111111111 (1023) to 0b0000000000 (0).
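As a quick check of the encoding, a small helper (names are illustrative) that turns ten up/down flags into the configuration value:

```python
def pin_config(pins_up):
    """pins_up[0] is pin 1 ... pins_up[9] is pin 10; True = standing."""
    value = 0
    for i, up in enumerate(pins_up):
        if up:
            value += 2 ** (9 - i)   # pin 1 adds 512 ... pin 10 adds 1
    return value

assert pin_config([True] * 10) == 1023   # full rack
assert pin_config([False] * 10) == 0     # all pins down
```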
There are several triggers that can be used to recognize a changed state. Since it is hoped that the camera captures at least one frame as the ball moves through the pins, and since pins often fall seconds after the ball has passed, a completed pin-configuration state must be recognized. A bowler’s deadwood or reset action creates this completion notice; if neither is needed, the next ball’s presence or a timer can mark the configuration complete.
The change in pin count is the primary trigger used for changing the state of the led bulbs and for sending data via IoT to blob storage. A 1.5 second delay timer is used to capture the before and after state of the pins.
V2 of the piCamera module has seven default resolution/framerate modes and specific framerates and resolutions can be requested. Early on, I found some sample code for motion detection which used a 1440 x 912 resolution. This resolution seemed to work well in capturing details of the ball, pins, and pinsetter. Unfortunately, the piCamera at this resolution is not capable of reliably recognizing the ball as it approaches the pins.
No | Resolution | Aspect Ratio | Framerate | Video | Image | FoV | Binning
---|---|---|---|---|---|---|---
1 | 1920x1080 | 16:9 | 0.1-30fps | x | | Partial | None
2 | 3280x2464 | 4:3 | 0.1-15fps | x | x | Full | None
3 | 3280x2464 | 4:3 | 0.1-15fps | x | x | Full | None
4 | 1640x1232 | 4:3 | 0.1-40fps | x | | Full | 2x2
5 | 1640x922 | 16:9 | 0.1-40fps | x | | Full | 2x2
6 | 1280x720 | 16:9 | 40-90fps | x | | Partial | 2x2
7 | 640x480 | 4:3 | 40-90fps | x | | Partial | 2x2
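A specific mode from the table can be requested explicitly when the camera is constructed; mode 5 is shown here as an example.

```python
from picamera import PiCamera

# Mode 5: 1640x922, 16:9, up to 40 fps, full field of view, 2x2 binning.
camera = PiCamera(sensor_mode=5, resolution=(1640, 922), framerate=40)
```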
Lower resolution, threading, and buffering with post-processing were all tried, as was a laser tripwire to count the number of balls thrown. Several insights were obtained from this exploration.
One insight: once the loop advances past a `frame in camera.capture_continuous(...)` iteration, the frame is destroyed, so any analysis or saving must happen inside the loop (see the sketch below).

My initial exploration of Python on an RPI showed the value of functions and the need for configuration settings; early efforts focused on the functions that I used often.
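The standard picamera/OpenCV capture pattern reflects this: the buffer is recycled, so each frame must be processed and the stream truncated inside the loop. A sketch at the 1440 x 912 resolution mentioned above:

```python
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera(resolution=(1440, 912), framerate=30)
raw = PiRGBArray(camera, size=(1440, 912))

for frame in camera.capture_continuous(raw, format='bgr', use_video_port=True):
    image = frame.array    # numpy array, usable directly with OpenCV
    # ...analyze or save here; the data is gone once the buffer is reused...
    raw.truncate(0)        # required: clear the stream for the next frame
```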
A deadwood cycle starts by lifting the standing pins, sweeping an arm to clear the deadwood, and replacing the standing pins. The reset cycle sweeps an arm to clear all pins and then places a new set of 10 pins.
This function can be called on any change in pin configuration. Initially, the function sends a video file for any change from the full 10-pin configuration at the start of a frame. A 2 MB video file captures about two seconds of activity. The Python IoT SDK contains samples with helper functions; these helpers are needed and were refactored and imported.
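As an alternative to the IoT file-upload helpers, a direct upload with the azure-storage-blob package would look roughly like this; the connection string, container name, and paths are placeholders, and this is not the repo's actual method.

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string from the storage account's access keys.
service = BlobServiceClient.from_connection_string('DefaultEndpointsProtocol=...')
blob = service.get_blob_client(container='duckpinvideos', blob='dp_1023_0_.h264')

with open('/dp/log/dp_1023_0_.h264', 'rb') as f:   # file on the RAM disk
    blob.upload_blob(f, overwrite=True)
```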
When a strike is thrown, flash the LED bulb and seven-segment counter multiple times.
In production, the RPI is headless and needs to auto-start the Python program at boot. There are several ways to do this, but after reading Five Ways To Run a Program On Your Raspberry Pi At Startup, I chose to use systemd files. If you use absolute paths to locate your files, the technique works well.
Here’s the startup file that I placed in the /lib/systemd/system folder.
#Startup
[Unit]
Description=My Service to start Duckpins
After=multi-user.target
[Service]
Type=idle
User=pi
ExecStart=/usr/bin/python3 /home/pi/Shared/Duckpin2/DPBoot.py
[Install]
WantedBy=multi-user.target
The commands:
sudo systemctl daemon-reload
sudo systemctl enable sample.service
sudo systemctl start sample.service
sudo systemctl status sample.service
sudo systemctl stop sample.service
and `ps aux` provide the tools to debug startup issues. Make sure that you stop sample.service before running a Python program that uses the camera or other resources held by the service.
Several blogs referenced the limited life of SD cards that are in a write, read, delete, and repeat loop. Extending the life of the SD card shows how to use RAM storage for these temporary files:
#!/bin/bash
# Create a mount point and mount a 100 MB RAM disk (tmpfs) on it
sudo mkdir -p /ram
sudo mount -t tmpfs -o size=100m tmpfs /ram
Add this line to /etc/fstab to make the mount permanent. It mounts the /dp/log folder to RAM, where mode=0777 sets the file permissions:
tmpfs /dp/log tmpfs defaults,noatime,nosuid,mode=0777,size=100m 0 0
Programs started by systemd do not have a console for printing. Python’s `logging` module is a fully developed system for recording performance information, but logging remains a TODO item. Concerns are the effect of I/O operations on frame-capture performance and where to store the logs (SD, RAM, or IoT).
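If logging does get implemented, a minimal setup writing to the tmpfs mount described above might look like this; the path and format are illustrative.

```python
import logging

# Write to the RAM disk so the SD card is spared repeated small writes.
logging.basicConfig(
    filename='/dp/log/duckpin.log',
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)
logging.info('Pin configuration changed: %d -> %d', 1023, 512)
```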
This repo contains all the code that I used while learning Python and understanding video frame processing. The key “production” files are DPBoot.py and its imports, and blobtoCount.py. The first is the file that boots via systemd on RPI startup (note the use of imports to keep the code length reasonable; the imported files must be in the same folder as DPBoot.py).
The second file is the post-processing file that I run when blobs are present in Azure Blob Storage. I had hoped to use an Azure Function for this processing but have yet to find the needed OpenCV support in Azure Functions. I am also not aware of a cheap VM process that I can schedule to run daily. At present, I run it nightly on my desktop.
I was unable to get framerates high enough to capture two clear observations of a fast-moving ball, and concluded that ball capture may be best handled as a deferred process. Since repeated overwriting of video files could damage the SD card, I opted to send the video files from a RAM disk via IoT to Blob storage for nightly processing. I would like to use Azure Functions for this processing, but I have not found a simple OpenCV installation for a Function; a VM or an old desktop was the fallback.
If the piCamera was moved, recalibrating the cropped areas was a challenge. It seems like an AI solution could auto-correct for the new position, but that is outside the initial scope.
I expected that JSON stored in Blob storage could be easily downloaded and analyzed by Power BI or Power Query. I did not find either to be straightforward.
Except for ball capture and counting, the project worked as expected. Up pins are reliably detected, and pin patterns are quickly displayed. If a pin pattern changes, 2 MB (about two seconds) of video is sent via IoT to Blob storage. Post-processing generally produces two to five frames of ball video, and centroid calculations are repeatable.
The images below show the contours of the ball detected as it moves toward the pins. The fifth image shows the centroids of multiple balls connected by a line. The video that produced these contours can be viewed by clicking the image below.
The text below is output from post-processing this single video. The centroid of the largest contour in each frame is captured as an x-y pair. This JSON-formatted data is entered in an Azure table.
>Successfully inserted the new entity into table - C:/DownloadsDP/Lane4Free\dp _1023_0_.h264 pindata {'PartitionKey': 'Lane 4', 'RowKey': '20180927643118', 'beginingPinCount': 1023, 'endingPinCount': 0, 'x0': '634', 'y0': '829', 'x1': '637', 'y1': '702', >'x2': '641', 'y2': '596', 'x3': '642', 'y3': '510', 'x4': '576', 'y4': '306'}
Blob storage content can be viewed and downloaded directly in the Azure Portal; Table data cannot. I find that Azure Storage Explorer is the best tool for viewing, editing, or downloading blobs and tables.
An Excel spreadsheet of 600+ processed videos can be found at Excel. It is a very simple tool to sort and filter the data for initial understanding of what may be possible with the data. I will provide access to the table data for any interested party.
Deploying this to eight lanes will challenge my current knowledge of dev-ops. Since the camera will have a slightly different location on each lane, the crop ranges for the pins, ball, arm, and setter will vary by lane. Since I want one code base, I plan to pass an argument at startup to specify the lane. I also want updates to be automated and not require me to push software changes to each lane. Can and should I schedule weekly operating system updates?
What level of hardware maintenance will be required? The number of IoT devices is increasing very rapidly. I expect that in the very near future, we will be surrounded by hundreds at all times during the day and night. But I doubt that any will be Duckpin capable; they will need extensive customization. There may be interest among younger students or developers, but hardware and software maintenance is a big consideration if expanded beyond the prototype. UPDATE - After two-plus years of operation and the capture of more than 10,000 rolls, reliability has been remarkably good. I had two jumper-wire female-to-male connections fail, likely due to the use of low-quality jumpers. LED bulbs have also failed gradually (the five LED panels on each bulb fail one at a time, gradually reducing the emitted light). Long-lasting LEDs seem to be a myth.
Research:
https://gregtinkers.wordpress.com/2016/03/25/car-speed-detector/
https://www.pyimagesearch.com/start-here-learn-computer-vision-opencv/
https://azure.microsoft.com/en-us/blog/how-to-use-azure-functions-with-iot-hub-message-routing/
http://www.nightbluefruit.com/blog/2013/02/how-to-use-git-to-maintain-code-between-2-computers/
Scheduling:
https://www.raspberrypi.org/documentation/linux/usage/cron.md - sudo apt-get install gnome-schedule
a. Most SD cards purchased with an RPI come with Raspbian installed. I suggest that you update it first using the apt-get command above.
b. To install Remote Desktop - $ sudo apt-get install xrdp
c. To get the RPI ip address - $ ifconfig
d. To get RPI ip address when headless. Command prompt from another computer on the local network - ssh pi@raspberrypi.local
e. To map a drive for using Remote Desktop Connection
i. $ sudo apt-get install samba samba-common-bin
ii. edit smb.config per https://www.youtube.com/watch?v=4P5nEH9zGDI
a. The repo contains the version number. As of this date version 9 is the latest node version for Linux. In the final step, make sure that you get the version intended. To download and install a version of Node.js, use the following:
i. $ curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
ii. $ sudo apt-get install -y nodejs
iii. $ node -v
a. sudo apt-get install build-essential git cmake pkg-config
b. sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
c. sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
d. sudo apt-get install libxvidcore-dev libx264-dev
e. sudo apt-get install libgtk2.0-dev
f. sudo apt-get install libatlas-base-dev gfortran
g. cd ~
h. git clone https://github.com/Itseez/opencv.git
i. cd opencv
j. git checkout 3.1.0
k. cd ~
l. git clone https://github.com/Itseez/opencv_contrib.git
m. cd opencv_contrib
n. git checkout 3.1.0
o. sudo apt-get install python3-dev
p. wget https://bootstrap.pypa.io/get-pip.py
q. sudo python3 get-pip.py
r. pip install numpy
s. cd ~/opencv
t. mkdir build
u. cd build
v. cmake -D CMAKE_BUILD_TYPE=RELEASE \
w. -D CMAKE_INSTALL_PREFIX=/usr/local \
x. -D INSTALL_C_EXAMPLES=OFF \
y. -D INSTALL_PYTHON_EXAMPLES=ON \
z. -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
aa. -D BUILD_EXAMPLES=ON ..
bb. make -j4
cc. sudo make install
dd. sudo ldconfig
a. curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
b. echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
c. sudo apt-get update && sudo apt-get install yarn
a. git clone https://github.com/Azure-Samples/iot-hub-python-raspberrypi-client-app.git
b. cd iot-hub-python-raspberrypi-client-app
c. nano config.py
i. sudo chmod u+x setup.sh
ii. sudo ./setup.sh
iii. You can also specify the version you want by running sudo ./setup.sh [--python-version|-p] [2.7|3.4|3.5].
d. If you run the script without a parameter, it will automatically detect the installed Python version (search sequence 2.7 -> 3.4 -> 3.5). Make sure your Python version stays consistent between building and running.
powershell C:/Users/Admin/Anaconda3/python.exe c:/Users/Admin/OneDrive/pyProjects/Duckpin2/blobtoCount.py
pause
<path>\Anaconda3
<path>\Anaconda3\scripts
<path>\Anaconda3\Library\bin
Click on images for video.
Sherman Duckpin Pinsetter at Congressional Country Club
A video on YouTube of the Sherman Pinsetter.
I was fortunate to get a backroom tour of the Candlepin Pinsetters at North Star Pizza in Wilmington, VT. Owner Steve Butler told us about the rebuild to solid state after a flood ruined the lanes and relays. Quite different than a duckpin setup.