
Getting DonkeyCar working on a Mac
I have been playing with a #selfdriving car for a while, and that is super exciting. From an #AI and #ML perspective it is small scale, but it allows one to explore all aspects of the tech stack and to appreciate the limitations of not only the software, but also the hardware.
With this, you run a neural network on a Raspberry Pi that uses TensorFlow and Keras and runs inference on the edge. The Pi doesn’t have enough power to train, so you need to do that on a beefier machine and then deploy the trained model back to the Pi.
Now, I didn’t have any issues getting this running on Windows, but getting it going on a Mac was a different story. There is documentation that outlines all the steps, but even if you follow it to a T, it breaks right at the end.
When I tried to create a car using the createcar command (this essentially creates the folders where you save the training images, the model, and the configuration of the car when you connect to it from your machine), I hit the error below. The actual file paths will probably be different for you, but essentially it is the same thing.
(donkey) AMAC02XN1T9JGH5:donkeycar amit.bahree$ donkey createcar ~/mycar
Traceback (most recent call last):
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 660, in _build_master
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 968, in require
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 859, in resolve
pkg_resources.ContextualVersionConflict: (imageio 2.4.1 (/anaconda3/envs/donkey/lib/python3.6/site-packages), Requirement.parse('imageio<3.0,>=2.5'), {'moviepy'})

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/anaconda3/envs/donkey/bin/donkey", line 6, in <module>
    from pkg_resources import load_entry_point
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 646, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 616, in _load_backward_compatible
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 2985, in <module>
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 2971, in _call_aside
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 2998, in _initialize_master_working_set
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 662, in _build_master
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 675, in _build_from_requirements
  File "/anaconda3/envs/donkey/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg/pkg_resources/__init__.py", line 854, in resolve
pkg_resources.DistributionNotFound: The 'imageio<3.0,>=2.5' distribution was not found and is required by moviepy
The key thing to focus on is the last line of each of those two blocks – the main thing causing the issue is MoviePy and its imageio requirement.
MoviePy is a Python library for video editing: cutting, concatenations, title insertions, video compositing (a.k.a. non-linear editing), video processing, and creation of custom effects.
It seems like when you go through the steps – clone the repo, set up Anaconda, install TensorFlow, and get the car configured – there is a mismatch in the MoviePy dependencies which it doesn’t like. The way to fix the issue is outlined below.
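Before changing anything, you can confirm the mismatch from within the same conda environment. Here is a small diagnostic sketch using pkg_resources (the same machinery that raises the error above); the printed version is what I saw on my machine:

import pkg_resources

# the imageio version actually installed (2.4.1 in my case)
print(pkg_resources.get_distribution("imageio").version)

# the requirement moviepy declares, which imageio 2.4.1 fails to satisfy
req = pkg_resources.Requirement.parse("imageio<3.0,>=2.5")
print(pkg_resources.get_distribution("imageio") in req)  # False means conflict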
Skip MoviePy
MoviePy is something you don’t need right away, only later when trying to make a movie (using the makemovie command, which allows you to create a movie file from the images in a Tub); it is not essential. The easiest way to get unblocked is to remove (or, my suggestion, comment out) the moviepy dependency from the setup.py file.
This should be around line 33 of the setup.py file that you will find in the folder where you cloned the git repo. As an example, the updated file is below, with the moviepy dependency commented out. Once you save this and go about creating the car, it should work. Of course, you cannot use the makemovie option later.
from setuptools import setup, find_packages
import os

with open("README.md", "r") as fh:
    long_description = fh.read()

setup(name='donkeycar',
      version='2.5.7',
      description='Self driving library for python.',
      long_description=long_description,
      long_description_content_type="text/markdown",
      url='https://github.com/autorope/donkeycar',
      download_url='https://github.com/autorope/donkeycar/archive/2.1.5.tar.gz',
      author='Will Roscoe',
      author_email='wroscoe@gmail.com',
      license='MIT',
      entry_points={
          'console_scripts': [
              'donkey=donkeycar.management.base:execute_from_command_line',
          ],
      },
      install_requires=['numpy',
                        'pillow',
                        'docopt',
                        'tornado==4.5.3',
                        'requests',
                        'h5py',
                        'python-socketio',
                        'flask',
                        'eventlet',
                        #'moviepy',
                        'pandas',
                        ],
      extras_require={
          'tf': ['tensorflow>=1.9.0'],
          'tf_gpu': ['tensorflow-gpu>=1.9.0'],
          'pi': [
              'picamera',
              'Adafruit_PCA9685',
          ],
          'dev': [
              'pytest',
              'pytest-cov',
              'responses'
          ],
          'ci': ['codecov']
      },
      include_package_data=True,
      classifiers=[
          # How mature is this project? Common values are
          #   3 - Alpha
          #   4 - Beta
          #   5 - Production/Stable
          'Development Status :: 3 - Alpha',
          # Indicate who your project is intended for
          'Intended Audience :: Developers',
          'Topic :: Scientific/Engineering :: Artificial Intelligence',
          # Pick your license as you wish (should match "license" above)
          'License :: OSI Approved :: MIT License',
          # Specify the Python versions you support here. In particular, ensure
          # that you indicate whether you support Python 2, Python 3 or both.
          'Programming Language :: Python :: 3.5',
          'Programming Language :: Python :: 3.6',
      ],
      keywords='selfdriving cars donkeycar diyrobocars',
      packages=find_packages(exclude=(['tests', 'docs', 'site', 'env'])),
      )
Once you have saved the setup.py file, you need to run the installation again and then run the createcar command. Both of these are outlined below.
pip install -e .
donkey createcar ~/mycar
Once you run these, you should see a successful installation, as shown by the output below. Note: your output might be a little different depending on the state of your conda packages.
(donkey) AMAC02XN1T9JGH5:donkeycar amit.bahree$ pip install -e .
Obtaining file:///Users/amit.bahree/CloudStation/Documents/Code/donkeycar
Requirement already satisfied: numpy in /anaconda3/envs/donkey/lib/python3.6/site-packages (from donkeycar==2.5.7) (1.14.5)
Requirement already satisfied: pillow in /anaconda3/envs/donkey/lib/python3.6/site-packages (from donkeycar==2.5.7) (4.2.1)
Requirement already satisfied: docopt in /anaconda3/envs/donkey/lib/python3.6/site-packages (from donkeycar==2.5.7) (0.6.2)
Collecting tornado==4.5.3 (from donkeycar==2.5.7)
Requirement already satisfied: requests in /anaconda3/envs/donkey/lib/python3.6/site-packages (from donkeycar==2.5.7) (2.18.4)
Requirement already satisfied: h5py in /anaconda3/envs/donkey/lib/python3.6/site-packages (from donkeycar==2.5.7) (2.7.1)
Collecting python-socketio (from donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/a1/71/118e4b7fb453d7095d6863f4b783dbaa57109af4bc2380300649c8942d61/python_socketio-4.0.0-py2.py3-none-any.whl
Collecting flask (from donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl
Collecting eventlet (from donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/86/7e/96e1412f96eeb2f2eca9342dcc4d5bc9305880a448b603b0a8e54439b71c/eventlet-0.24.1-py2.py3-none-any.whl
Collecting pandas (from donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/99/12/bf4c58eea94cea4f91ff931f284146337814fb8546e6eb0b52584446fd52/pandas-0.24.1-cp36-cp36m-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
Requirement already satisfied: olefile in /anaconda3/envs/donkey/lib/python3.6/site-packages (from pillow->donkeycar==2.5.7) (0.44)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /anaconda3/envs/donkey/lib/python3.6/site-packages (from requests->donkeycar==2.5.7) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /anaconda3/envs/donkey/lib/python3.6/site-packages (from requests->donkeycar==2.5.7) (2017.7.27.1)
Requirement already satisfied: idna<2.7,>=2.5 in /anaconda3/envs/donkey/lib/python3.6/site-packages (from requests->donkeycar==2.5.7) (2.6)
Requirement already satisfied: urllib3<1.23,>=1.21.1 in /anaconda3/envs/donkey/lib/python3.6/site-packages (from requests->donkeycar==2.5.7) (1.22)
Requirement already satisfied: six in /anaconda3/envs/donkey/lib/python3.6/site-packages (from h5py->donkeycar==2.5.7) (1.10.0)
Collecting python-engineio>=3.2.0 (from python-socketio->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/95/91/d083bd7b5d408af53633377dfbf87bf181236c8916d36213388b12eaa999/python_engineio-3.4.3-py2.py3-none-any.whl
Collecting click>=5.1 (from flask->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl
Collecting itsdangerous>=0.24 (from flask->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/76/ae/44b03b253d6fade317f32c24d100b3b35c2239807046a4c953c7b89fa49e/itsdangerous-1.1.0-py2.py3-none-any.whl
Collecting Werkzeug>=0.14 (from flask->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl
Collecting Jinja2>=2.10 (from flask->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl
Collecting monotonic>=1.4 (from eventlet->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/ac/aa/063eca6a416f397bd99552c534c6d11d57f58f2e94c14780f3bbf818c4cf/monotonic-1.5-py2.py3-none-any.whl
Collecting greenlet>=0.3 (from eventlet->donkeycar==2.5.7)
Collecting dnspython>=1.15.0 (from eventlet->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/ec/d3/3aa0e7213ef72b8585747aa0e271a9523e713813b9a20177ebe1e939deb0/dnspython-1.16.0-py2.py3-none-any.whl
Collecting pytz>=2011k (from pandas->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/61/28/1d3920e4d1d50b19bc5d24398a7cd85cc7b9a75a490570d5a30c57622d34/pytz-2018.9-py2.py3-none-any.whl
Collecting python-dateutil>=2.5.0 (from pandas->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/41/17/c62faccbfbd163c7f57f3844689e3a78bae1f403648a6afb1d0866d87fbb/python_dateutil-2.8.0-py2.py3-none-any.whl
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->flask->donkeycar==2.5.7)
  Using cached https://files.pythonhosted.org/packages/f0/00/a6aea33f5598b080b86d6b6d1214b51afe3ffa6100b902d5aa465080083f/MarkupSafe-1.1.1-cp36-cp36m-macosx_10_6_intel.whl
Installing collected packages: tornado, python-engineio, python-socketio, click, itsdangerous, Werkzeug, MarkupSafe, Jinja2, flask, monotonic, greenlet, dnspython, eventlet, pytz, python-dateutil, pandas, donkeycar
  Found existing installation: tornado 4.5.1
    Uninstalling tornado-4.5.1:
      Successfully uninstalled tornado-4.5.1
  Found existing installation: Werkzeug 0.12.2
    Uninstalling Werkzeug-0.12.2:
      Successfully uninstalled Werkzeug-0.12.2
  Running setup.py develop for donkeycar
Successfully installed Jinja2-2.10 MarkupSafe-1.1.1 Werkzeug-0.14.1 click-7.0 dnspython-1.16.0 donkeycar eventlet-0.24.1 flask-1.0.2 greenlet-0.4.15 itsdangerous-1.1.0 monotonic-1.5 pandas-0.24.1 python-dateutil-2.8.0 python-engineio-3.4.3 python-socketio-4.0.0 pytz-2018.9 tornado-4.5.3
And when I run createcar, you can see it worked as expected – in my case, creating the ‘mycar’ folder in my home directory. Of course, you can put this wherever you prefer.
(donkey) AMAC02XN1T9JGH5:donkeycar amit.bahree$ donkey createcar ~/mycar
using donkey version: 2.5.7 ...
Creating car folder: /Users/amit.bahree/mycar
making dir /Users/amit.bahree/mycar
Creating data & model folders.
making dir /Users/amit.bahree/mycar/models
making dir /Users/amit.bahree/mycar/data
making dir /Users/amit.bahree/mycar/logs
Copying car application template: donkey2
Copying car config defaults. Adjust these before starting your car.
Donkey setup complete.
It is interesting to see that this is more stable on Windows than on a Mac. Also, one last thing to leave you with – when I first ran the installation, the hint that something was wrong was right there in the output, but I didn’t pay too much attention to it at the time.

I don’t know at this time what the proper solution for MoviePy is to get this sorted – luckily it’s not a big deal at the moment.
threads
Some people, when confronted with a problem, think, ‘I know, I’ll use threads’ – and then two they hav erpoblesms.
#GeekyJokes and if you don’t get it, see this. 🙂
VSCode + Python on a mac
As my experimentation continues, I wanted to get Visual Studio Code installed on a Mac and use Python as the language of choice – the main reason for the Mac is to understand and explore the #ML libraries, runtimes, and their support on a Mac (both natively and in containers – Docker).
Now, Microsoft has a very nice tutorial to get VSCode set up and running on a Mac, including some basic configuration (e.g. touchbar support). But when it comes to getting Python set up and running, that is a different matter. Whilst the tutorial is good, it doesn’t actually work and errors out.
Below is the code that Microsoft outlines in the tutorial for Python. It is essentially a hello-world using packages, and is quite simple; but as written it will fail.
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 20, 100)  # Create a list of evenly-spaced numbers over the range
plt.plot(x, np.sin(x))       # Plot the sine of each x point
plt.show()                   # Display the plot
When you run this, you will see an error that is something like the one outlined below.
2019-01-18 14:23:34.648 python[38527:919087] -[NSApplication _setup:]: unrecognized selector sent to instance 0x7fbafa49bf10
2019-01-18 14:23:34.654 python[38527:919087] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSApplication _setup:]: unrecognized selector sent to instance 0x7fbafa49bf10'
*** First throw call stack:
(
  0   CoreFoundation                 0x00007fff521a1ecd __exceptionPreprocess + 256
  1   libobjc.A.dylib                0x00007fff7e25d720 objc_exception_throw + 48
  2   CoreFoundation                 0x00007fff5221f275 -[NSObject(NSObject) __retain_OA] + 0
  3   CoreFoundation                 0x00007fff52143b40 ___forwarding___ + 1486
  4   CoreFoundation                 0x00007fff521434e8 _CF_forwarding_prep_0 + 120
  5   libtk8.6.dylib                 0x000000011523031d TkpInit + 413
  6   libtk8.6.dylib                 0x000000011518817e Initialize + 2622
  7   _tkinter.cpython-37m-darwin.so 0x0000000114fb2a0f _tkinter_create + 1183
  8   python                         0x0000000101836ba6 _PyMethodDef_RawFastCallKeywords + 230
  9   python                         0x00000001019772b1 call_function + 257
  10  python                         0x0000000101974daf _PyEval_EvalFrameDefault + 45215
  11  python                         0x0000000101968a42 _PyEval_EvalCodeWithName + 418
  12  python                         0x0000000101835867 _PyFunction_FastCallDict + 231
  13  python                         0x00000001018b9481 slot_tp_init + 193
  14  python                         0x00000001018c3441 type_call + 241
  15  python                         0x0000000101836573 _PyObject_FastCallKeywords + 179
  16  python                         0x000000010197733f call_function + 399
  17  python                         0x0000000101975052 _PyEval_EvalFrameDefault + 45890
  18  python                         0x0000000101836368 function_code_fastcall + 120
  19  python                         0x0000000101977265 call_function + 181
  20  python                         0x0000000101974daf _PyEval_EvalFrameDefault + 45215
  21  python                         0x0000000101968a42 _PyEval_EvalCodeWithName + 418
  22  python                         0x0000000101835867 _PyFunction_FastCallDict + 231
  23  python                         0x0000000101839782 method_call + 130
  24  python                         0x00000001018371e2 PyObject_Call + 130
  25  python                         0x00000001019751c6 _PyEval_EvalFrameDefault + 46262
  26  python                         0x0000000101968a42 _PyEval_EvalCodeWithName + 418
  27  python                         0x0000000101836a73 _PyFunction_FastCallKeywords + 195
  28  python                         0x0000000101977265 call_function + 181
  29  python                         0x0000000101974f99 _PyEval_EvalFrameDefault + 45705
  30  python                         0x0000000101836368 function_code_fastcall + 120
  31  python                         0x0000000101977265 call_function + 181
  32  python                         0x0000000101974f99 _PyEval_EvalFrameDefault + 45705
  33  python                         0x0000000101968a42 _PyEval_EvalCodeWithName + 418
  34  python                         0x0000000101836a73 _PyFunction_FastCallKeywords + 195
  35  python                         0x0000000101977265 call_function + 181
  36  python                         0x0000000101974f99 _PyEval_EvalFrameDefault + 45705
  37  python                         0x0000000101968a42 _PyEval_EvalCodeWithName + 418
  38  python                         0x0000000101836a73 _PyFunction_FastCallKeywords + 195
  39  python                         0x0000000101977265 call_function + 181
  40  python                         0x0000000101974daf _PyEval_EvalFrameDefault + 45215
  41  python                         0x0000000101968a42 _PyEval_EvalCodeWithName + 418
  42  python                         0x00000001019cc9a0 PyRun_FileExFlags + 256
  43  python                         0x00000001019cc104 PyRun_SimpleFileExFlags + 388
  44  python                         0x00000001019f7edc pymain_main + 9148
  45  python                         0x0000000101808ece main + 142
  46  libdyld.dylib                  0x00007fff7f32bed9 start + 1
  47  ???                            0x0000000000000003 0x0 + 3
)
libc++abi.dylib: terminating with uncaught exception of type NSException
[Done] exited with code=null in 1.017 seconds
The main reason this fails is that one has to be a little more explicit with matplotlib (the library we are trying to use). Matplotlib has a concept of backends, which essentially are the runtime dependencies needed to support various execution environments – both interactive and non-interactive.
For matplotlib to work on a Mac, the raster graphics C++ library it uses is based on something called Anti-Grain Geometry (AGG), and for rendering to work we need to be explicit about which AGG-based backend to use (there are multiple raster libraries).
In addition, on macOS there is a limitation when rendering in OS X windows: it presently lacks blocking show() behavior when matplotlib is in non-interactive mode.
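As an aside, if you want to see which backend is currently active and what else is available, here is a quick diagnostic sketch (matplotlib.get_backend() and the backend lists in matplotlib.rcsetup are standard, though the exact contents vary by version):

import matplotlib
print(matplotlib.get_backend())    # the backend currently selected

from matplotlib import rcsetup
print(rcsetup.interactive_bk)      # interactive backends (TkAgg is one)
print(rcsetup.non_interactive_bk)  # non-interactive backends (e.g. agg, pdf)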
To get around this, we explicitly tell matplotlib which backend to use (“TkAgg” in our case) and then it all works. I have an updated code sample below, which adds more points and also waits for console input, so one can see what the output looks like.
import matplotlib
matplotlib.use("TkAgg")
from matplotlib import pyplot as plt
import numpy as np

def waitforuser():
    input("Press enter to continue ...")
    return

x = np.linspace(0, 50, 200)  # Create a list of evenly-spaced numbers over the range
y = np.sin(x)

print(x)
waitforuser()

print(y)
waitforuser()

plt.plot(x, y)
plt.show()
And in case you are wondering what it looks like, below are a few screenshots showing the output.



To get everything working, make sure you set up linting, the debugger, and the Python environment properly. And of course, you can go nuts with containers! Happy coding!
Azure Cognitive Services in containers is the smart way to go

{Cross posted from my post on Avanade}
Containers just got smarter.
That’s the news from Microsoft, which announced recently that Azure Cognitive Services now supports containers. The marriage of AI and containers is a technology story, of course, but it’s a potentially even bigger business story, one that affects where and how you can do business and gain competitive advantage.
First, the technology story
Containers aren’t new, of course. They’re an increasingly popular technology with a big impact on business. That’s because they boost the agility and flexibility with which a business can roll out new tools to employees and new products and services to customers.
With containers, a business can get software releases and changes out faster and more frequently, increasing its competitive advantage. Because containers abstract applications from their underlying operating systems and other services—like virtual machines abstracted from hardware—those applications can run anywhere: in the cloud, on a laptop, in a kiosk or in an intelligent Internet-of-Things (IoT) edge device in the field.
In many respects this frees up the application’s developer, who can focus on creating the best, most useful software for the business. With Microsoft’s announcement, that software can now more easily include object detection, vision recognition, text and language understanding.
At Avanade, we take containers a step further by including support for them in our modern engineering platform, a key part of our overall approach to intelligent IT. So, you can automate your creation and management of containers—including AI-enabled containers—for a faster, easier, more seamless DevOps process. You can take greater advantage of IoT capabilities and move technologies such as AI closer to the edge, where they can reduce latency and boost performance.
What AI containers do for business
And you can do much more, which is where the business story gets interesting. With the greater agility and adaptability that comes with container-based AI services, you can respond more quickly to new competition, regulatory environments and business models. That contrasts with the more limited responses that have been possible with traditional, cloud-based AI.
For example, data sovereignty laws and GDPR requirements generally restrict the transfer of data to the cloud, where cloud-based cognitive services can interact with it. Now, with containers that support cognitive services, you can avoid those restrictions by running your services locally.
A retail bank might use containerized AI to identify customers, address their needs, process payments and offer additional services, boosting customer satisfaction and bank revenue—all without sending private financial data outside the region (or even outside the bank) in accordance with GDPR.
Similarly, regional medical centers and clinics subject to HIPAA privacy laws in the US can process protected information on site with containerized AI to cut patient wait times and deliver better health outcomes.
Or, think about limited-connectivity or disconnected environments—such as manufacturing shop floors, remote customer sites or oil rigs or tankers—that can’t count on accessing AI that resides in the always-on cloud. Previously, these sites might have had to batch their data to process it during narrow periods of cloud connectivity, with the delays greatly limiting the timeliness and usefulness of AI.
Now, these sites can combine IoT and AI to anticipate and respond to manufacturing disruptions before they occur, increasing safety, productivity and product quality while reducing errors and costs.
If you can’t bring your data to your AI, now you can bring your AI to your data. That’s the message of container-hosted AI and the modern engineering platform. Together, they optimize your ability to bring AI into environments where you can’t count on the cloud. Using AI where you couldn’t before makes innovative solutions possible—and innovative solutions deliver competitive advantage.
Boost ROI and scale
If you’re already using Azure Cognitive Services, you’ve invested time and money to train the models that support your use cases. Because those models are now portable, you can take advantage of them in regulated, limited-connectivity and disconnected environments, increasing your return on that investment.
You can also scale your use of AI with a combination of cloud- and container-based architectures. That enables you to apply the most appropriate architectural form for any given environment or use. At the same time, you’re deploying consistent AI technology across the enterprise, increasing reliability while decreasing your operating cost.
Keep in mind…
Here are three things to keep in mind as you think about taking advantage of this important news:
- Break the barriers between your data scientists and business creatives. Containerized cognitive services is about far more than putting AI where you couldn’t before. It’s about using it in exciting new ways to advance the business. Unless you have heterogeneous teams bringing diverse perspectives to the table, you may miss some of the most important innovation possibilities for your business.
- You need a cloud strategy that’s not just about the cloud. If you don’t yet have a cloud strategy, you’re behind the curve. But if your cloud strategy is limited to the cloud, you may be about to fall behind the next curve. Microsoft’s announcement is further proof that the cloud is crucial to the enterprise—and also part of a larger environment, including both legacy and edge platforms, with which it must integrate.
- Be prepared for the ethics issues. Putting cognitive services in places you couldn’t before could raise new ethics issues. After all, we’re talking about the ability to read people’s expressions and even their emotions. This shouldn’t put you off—but it should put you on alert. Plug your ethics committee into these discussions when appropriate. If you don’t already have an ethics committee, create one. But that’s another post. 🙂
Want to learn more?
Microsoft’s announcement furthers the democratization of AI: the use of AI in more places and in more ways throughout the enterprise and beyond. Whether you turn to us for your AI solutions or look to us to assist you in developing your own, we’re ready to help with the greatest concentration of Microsoft expertise outside of Microsoft itself.
Bugs
It is a known bug with the programming language. 🙂
#GeekyJokes #ProgrammerHumor
Docker container running Ubuntu on Windows
Containers are all the rage right now and rightfully so – not only do they help abstract away some of the complexity and dependencies of your apps and solutions, they also make managing environments and deployments much simpler. And the fact that you can do it in a consistent, repeatable fashion is just icing on the cake.
As a simple example, with Docker on Windows (as in my case), I can run a dockerized app on a different OS than the host, and it can also be interactive.
The command below will spawn a container, pull down the image of Ubuntu and then run an interactive terminal, tying the terminal to the standard input. Of course in this example, this requires that you already have Docker installed (the Community Edition would be just fine to play around with).
docker run --interactive --tty ubuntu bash
Now, if with Docker on Windows you get the following error – “Error response from daemon: operating system on which parent image was created is not Windows.” – the way to fix it is to switch on experimental features.

To fix this, right-click the Docker icon in the system tray, choose Settings, and on the Daemon tab of the settings screen, enable experimental features.


After enabling experimental features, the Docker daemon will restart. After that, if you run the docker command again, it works as expected (a scripted alternative follows the list):
- It pulls down the image (which is used to run in the container)
- Runs Ubuntu in an interactive session (this is because of the options I chose)
- And all within my PowerShell console on Windows.
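If you would rather script this than type commands, the Docker SDK for Python can drive the same daemon – a minimal sketch, assuming you have done pip install docker (the image name and command here are just examples):

import docker

client = docker.from_env()  # connects to the local Docker daemon

# Roughly the scripted equivalent of `docker run ubuntu ...`; a truly
# interactive `-it bash` session is really a job for the CLI.
output = client.containers.run("ubuntu", 'echo "hello from ubuntu"', remove=True)
print(output.decode())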

This is just the beginning, there of course is a lot more to it. 🙂
Ubuntu on Surface Book
I am writing this on a Microsoft Surface Book, running Ubuntu natively, and there isn’t any Windows option – I blew away the Windows partition, and there isn’t any other OS on it.
Why, some of you might ask? Well, why not. 🙂 For me the motive is twofold: one, I am a geek and love to hack on what works and what doesn’t – how else will one learn? And two, to explore and see which AI frameworks, tools, and runtimes work better natively on Linux.
Well, I must say, this experiment has been a pleasant surprise and much more successful than I originally thought it would be. Most things work quite well on the Surface with Ubuntu – including touch and pen (both appear as mouse clicks). As the screenshot below shows, Ubuntu is running quite nicely – including most of the features. There are a few things that don’t quite work – I have them listed later in the post.

So much so that Visual Studio Code is running natively, and whilst I haven’t had a chance to use it much (yet), the fact that it runs at all was something I wasn’t expecting without resorting to containers or VMs or the like.

So, how does one go about doing this? It is quite simple these days, to be honest. Below are the steps I followed. I do think the real magic is the hard work that JakeDay has done to get the kernel and firmware supported.
Disclaimer: My experience outlined here is related to the Surface Book – it can also run and be supported on other Surface devices, and the exact nature of what works or doesn’t work would be a little different.
- Hardware – Have a USB keyboard and mouse handy just in case; and if you are on a Surface Pro or something with only one USB port, a USB hub. You will of course also need a USB drive to boot Ubuntu off.
- Disable Secure Boot – without this, getting the bootloader sequence going would be challenging. If you aren’t sure how, check out the instructions here to disable secure boot.
- Delete / Shrink the Windows partition – If you don’t care about Windows and have a copy of the license somewhere so you can get back, you might want to just delete it. If you want to shrink it instead (say this is your primary machine and you want the option to go back at some point), then go to Disk Management in Windows and resize the partition – keep at least 50 GB for Ubuntu.
- Ubuntu USB drive – if you don’t have one already, create a bootable Ubuntu USB drive. You can get more instructions here. And if you are on Windows, I would recommend using Rufus.
- Install Ubuntu – Boot off the USB drive you created (making sure you have disabled secure boot first). I would pick most of the default options for Ubuntu for now.
- Patched kernel – Once you have Ubuntu running, I would recommend installing the patched kernel and headers that allow for Surface support. The steps are outlined below and need to be executed in a terminal.
- Install Dependencies: sudo apt install git curl wget sed
- Clone the repo: git clone https://github.com/jakeday/linux-surface.git ~/linux-surface
- Change working directory: cd ~/linux-surface
- Run setup: sudo sh setup.sh
- Reboot on the patched kernel
Change boot kernel: Finally, after you have rebooted, the odds of Ubuntu booting off the ‘right’ kernel are quite slim, and it is best to pick it manually. You can of course use GRUB directly, or – what I find better – install Grub Customizer and then choose the correct option. Once picked, and after you hit save, you also need to run the following in a terminal to make the change persist: sudo update-grub

And that is all there is to it for getting the base install and customization running.
If you are super curious what that setup script does, the code is below (also listed on GitHub). What is interesting is to see the various hardware models supported.
LX_BASE="" LX_VERSION="" if [ -r /etc/os-release ]; then . /etc/os-release if [ $ID = arch ]; then LX_BASE=$ID elif [ $ID = ubuntu ]; then LX_BASE=$ID LX_VERSION=$VERSION_ID elif [ ! -z "$UBUNTU_CODENAME" ] ; then LX_BASE="ubuntu" LX_VERSION=$VERSION_ID else LX_BASE=$ID LX_VERSION=$VERSION fi else echo "Could not identify your distro. Please open script and run commands manually." exit fi SUR_MODEL="$(dmidecode | grep "Product Name" -m 1 | xargs | sed -e 's/Product Name: //g')" SUR_SKU="$(dmidecode | grep "SKU Number" -m 1 | xargs | sed -e 's/SKU Number: //g')" echo "\nRunning $LX_BASE version $LX_VERSION on a $SUR_MODEL.\n" read -rp "Press enter if this is correct, or CTRL-C to cancel." cont;echo echo "\nContinuing setup...\n" echo "Coping the config files under root to where they belong...\n" cp -Rb root/* / echo "Making /lib/systemd/system-sleep/sleep executable...\n" chmod a+x /lib/systemd/system-sleep/sleep read -rp "Do you want to replace suspend with hibernate? (type yes or no) " usehibernate;echo if [ "$usehibernate" = "yes" ]; then if [ "$LX_BASE" = "ubuntu" ] && [ 1 -eq "$(echo "${LX_VERSION} >= 17.10" | bc)" ]; then echo "Using Hibernate instead of Suspend...\n" ln -sfb /lib/systemd/system/hibernate.target /etc/systemd/system/suspend.target && sudo ln -sfb /lib/systemd/system/systemd-hibernate.service /etc/systemd/system/systemd-suspend.service else echo "Using Hibernate instead of Suspend...\n" ln -sfb /usr/lib/systemd/system/hibernate.target /etc/systemd/system/suspend.target && sudo ln -sfb /usr/lib/systemd/system/systemd-hibernate.service /etc/systemd/system/systemd-suspend.service fi else echo "Not touching Suspend\n" fi read -rp "Do you want use the patched libwacom packages? (type yes or no) " uselibwacom;echo if [ "$uselibwacom" = "yes" ]; then echo "Installing patched libwacom packages..." 
dpkg -i packages/libwacom/*.deb apt-mark hold libwacom else echo "Not touching libwacom" fi if [ "$SUR_MODEL" = "Surface Pro 3" ]; then echo "\nInstalling i915 firmware for Surface Pro 3...\n" mkdir -p /lib/firmware/i915 unzip -o firmware/i915_firmware_bxt.zip -d /lib/firmware/i915/ fi if [ "$SUR_MODEL" = "Surface Pro" ]; then echo "\nInstalling IPTS firmware for Surface Pro 2017...\n" mkdir -p /lib/firmware/intel/ipts unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/ echo "\nInstalling i915 firmware for Surface Pro 2017...\n" mkdir -p /lib/firmware/i915 unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/ fi if [ "$SUR_MODEL" = "Surface Pro 4" ]; then echo "\nInstalling IPTS firmware for Surface Pro 4...\n" mkdir -p /lib/firmware/intel/ipts unzip -o firmware/ipts_firmware_v78.zip -d /lib/firmware/intel/ipts/ echo "\nInstalling i915 firmware for Surface Pro 4...\n" mkdir -p /lib/firmware/i915 unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/ fi if [ "$SUR_MODEL" = "Surface Pro 2017" ]; then echo "\nInstalling IPTS firmware for Surface Pro 2017...\n" mkdir -p /lib/firmware/intel/ipts unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/ echo "\nInstalling i915 firmware for Surface Pro 2017...\n" mkdir -p /lib/firmware/i915 unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/ fi if [ "$SUR_MODEL" = "Surface Pro 6" ]; then echo "\nInstalling IPTS firmware for Surface Pro 6...\n" mkdir -p /lib/firmware/intel/ipts unzip -o firmware/ipts_firmware_v102.zip -d /lib/firmware/intel/ipts/ echo "\nInstalling i915 firmware for Surface Pro 6...\n" mkdir -p /lib/firmware/i915 unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/ fi if [ "$SUR_MODEL" = "Surface Laptop" ]; then echo "\nInstalling IPTS firmware for Surface Laptop...\n" mkdir -p /lib/firmware/intel/ipts unzip -o firmware/ipts_firmware_v79.zip -d /lib/firmware/intel/ipts/ echo "\nInstalling i915 firmware for Surface Laptop...\n" mkdir -p /lib/firmware/i915 unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/ fi if [ "$SUR_MODEL" = "Surface Book" ]; then echo "\nInstalling IPTS firmware for Surface Book...\n" mkdir -p /lib/firmware/intel/ipts unzip -o firmware/ipts_firmware_v76.zip -d /lib/firmware/intel/ipts/ echo "\nInstalling i915 firmware for Surface Book...\n" mkdir -p /lib/firmware/i915 unzip -o firmware/i915_firmware_skl.zip -d /lib/firmware/i915/ fi if [ "$SUR_MODEL" = "Surface Book 2" ]; then echo "\nInstalling IPTS firmware for Surface Book 2...\n" mkdir -p /lib/firmware/intel/ipts if [ "$SUR_SKU" = "Surface_Book_1793" ]; then unzip -o firmware/ipts_firmware_v101.zip -d /lib/firmware/intel/ipts/ else unzip -o firmware/ipts_firmware_v137.zip -d /lib/firmware/intel/ipts/ fi echo "\nInstalling i915 firmware for Surface Book 2...\n" mkdir -p /lib/firmware/i915 unzip -o firmware/i915_firmware_kbl.zip -d /lib/firmware/i915/ echo "\nInstalling nvidia firmware for Surface Book 2...\n" mkdir -p /lib/firmware/nvidia/gp108 unzip -o firmware/nvidia_firmware_gp108.zip -d /lib/firmware/nvidia/gp108/ fi if [ "$SUR_MODEL" = "Surface Go" ]; then echo "\nInstalling ath10k firmware for Surface Go...\n" mkdir -p /lib/firmware/ath10k unzip -o firmware/ath10k_firmware.zip -d /lib/firmware/ath10k/ fi echo "Installing marvell firmware...\n" mkdir -p /lib/firmware/mrvl/ unzip -o firmware/mrvl_firmware.zip -d /lib/firmware/mrvl/ read -rp "Do you want to set your clock to local time instead of UTC? This fixes issues when dual booting with Windows. 
(type yes or no) " uselocaltime;echo if [ "$uselocaltime" = "yes" ]; then echo "Setting clock to local time...\n" timedatectl set-local-rtc 1 hwclock --systohc --localtime else echo "Not setting clock" fi read -rp "Do you want this script to download and install the latest kernel for you? (type yes or no) " autoinstallkernel;echo if [ "$autoinstallkernel" = "yes" ]; then echo "Downloading latest kernel...\n" urls=$(curl --silent "https://api.github.com/repos/jakeday/linux-surface/releases/latest" | grep '"browser_download_url":' | sed -E 's/.*"([^"]+)".*/\1/') resp=$(wget -P tmp $urls) echo "Installing latest kernel...\n" dpkg -i tmp/*.deb rm -rf tmp else echo "Not downloading latest kernel" fi echo "\nAll done! Please reboot."
Lastly, below are the things not working for me – none of these are deal breakers but something to be aware of.
- Cameras are not supported – either of the two.
- Dedicated GPU (if you have one). I was a little bummed out by this, as I got the dedicated GPU for some #MachineLearning experimentation, but then this whole thing is a different type of experimentation, so I am OK.
- I can control the volume using the speaker widget in the top right corner, but the volume buttons on top don’t work.
- Sleep / Hibernation – It has some issues and for now I have sleep disabled but have hibernation setup.
- Detaching the screen will immediately terminate everything and power off the machine (not a clean poweroff) – I am guessing it cannot transition between the two batteries in the base and the screen. However, if it is already detached, then it works without any issues.
Happy hacking!
Roots of #AI
The naming is unfortunate when talking about #AI. There isn’t anything about intelligence – not as we humans know it. If we could rewind back to the ’50s, we would perhaps rename it to something like Computational Intelligence, which is more accurate. And although I have outlined the difference between some of the elements of AI in the past, I wanted to get back to what the intent was and how this area started.
Can machines think? Some say the origins of #AI go back to Turing and started with his paper “Computing machinery and intelligence” (PDF), published in 1950. Whilst Turing might have planted the seed, it was a program called Logic Theorist, created by Allen Newell, Cliff Shaw, and Herbert Simon, which was the first #ArtificialIntelligence program. Of course, it wasn’t called #AI then.
That started back in 1956, when the Logic Theorist was presented at a conference at Dartmouth College called the “Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI)” (PDF). The term “#AI” was coined at that conference.
Since then, AI has had a roller coaster of a ride over the decades – from colder than hell (I presume) winters, to hotter than lava with it being everywhere. As someone said, time will heal all wounds.

Today, many of us use #AI, #DeepLearning, and #MachineLearning interchangeably. Over the course of the last couple of years, I have learned to ignore that, but fundamentally the distinction is important.
AI, we would say, is more computational intelligence – allowing computers to do tasks that would be difficult for humans to do, certainly at scale. These tasks are accomplished using different mechanisms and techniques, using “intelligent agents”.

Machine learning is a subset of AI, where the program or algorithm can learn from previous outputs and improve based on that data – hence the “learning” part. It is akin to learning from experience, but it isn’t the same thing as how we humans comprehend and understand. Some of us think the program is rewriting itself, which technically isn’t an accurate description.
Deep learning is a set of techniques and algorithms of machine learning that are inspired by how the neurons in our brain connect and work. This set of techniques is also called neural networks, and is essentially nothing but a type of machine learning.

For any of this AI “magic” to work, the one thing it needs to feed on is data. Without data, none of this would be possible. This data is classified into two categories – features and labels (a small illustrative sketch follows the list below).
- Features – these are aspects of whatever we are interested in. For example, if we are interested in vehicles, features could be the colour, make, and model of the vehicle.
- Labels – these are the buckets or categories we put the things we are interested in into. Using the same vehicle example, we could have labels such as SUV, Sedan, Sports Car, Truck, etc. that categorize vehicles.
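To make this concrete, here is a tiny, purely illustrative sketch in Python – all of the data and the probabilities are made up – showing vehicle features, labels, and the kind of probabilistic output a classifier produces:

# Illustrative only: made-up data to show features vs. labels.
# Each example is described by its features...
features = [
    {"colour": "red",   "make": "Porsche", "model": "911"},
    {"colour": "white", "make": "Ford",    "model": "F-150"},
]

# ...and the labels are the buckets we want to sort examples into.
labels = ["SUV", "Sedan", "Sports Car", "Truck"]

# A trained classifier does not return an absolute answer, but a
# probability for each label (hypothetical numbers for the Porsche):
prediction = {"SUV": 0.05, "Sedan": 0.10, "Sports Car": 0.80, "Truck": 0.05}
best_guess = max(prediction, key=prediction.get)
print(best_guess, prediction[best_guess])  # Sports Car 0.8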
One key principle to remember when it comes to #AI – all the outcomes that are described are in terms of probabilities, not absolutes. All it suggests is the likelihood of something happening; most things cannot be predicted with total certainty. This is a fundamental aspect one should remember when making decisions.
There isn’t a universal definition of AI, which sometimes doesn’t help. Everyone has their own perception of it. I have gotten over that; I just come to terms with whoever I am speaking with and ensure we are talking the same lingo and meaning. It doesn’t help to get academic about it. 🙂
For example, the definitions of AI from three leading analysts (Gartner, IDC, and Forrester), outlined below, are a good indicator of how confusing this can get.
- Gartner – At its core, AI is about solving business problems in novel ways. It stretches across any organization from innovation, R&D and IT to data science.
- IDC defines cognitive/Artificial Intelligence (AI) systems as a set of technologies that use deep natural language processing and understanding to answer questions and provide recommendations and direction. IDC’s coverage of cognitive/AI systems examines:
- Digital assistants
- Automated advisors
- Artificial intelligence, deep learning and machine learning
- Automated recommendation systems
- Forrester defines AI as a liberatory technology at its core, and businesses that integrate it will free workers to become more innovative, creative, and adaptive than ever before. But these technologies are still in early stages.
And the field is just exploding now – not just with new research around #DeepLearning or #MachineLearning, but also net new aspects from a business perspective; things like:
- Digital Ethics
- Conversational AI
- Democratization of AI
- Data Engineering (OK, not new, but certainly key)
- Model Management
- RPA (or #IntelligentAutomation)
- AI Strategy
It is a new and exciting world that spans multiple spectrums. Don’t try to drink from the fire hose; take it in slowly, appreciate the nuances, see where each piece brings value, and discuss in terms of outcomes.
Computer – a male or female?
So, both these arguments make sense. I can’t decide which one is accurate.
Patent – Systems and methods for organizing and presenting skill progressions
This has been a long time coming – our patent, filed about 4 years ago, was finally awarded today by the USPTO. Some details below.
United States Patent 10,102,774
Bahree , et al. October 16, 2018
Systems and methods for organizing and presenting skill progression
In any organization, the skills collectively possessed by individuals of the organization can determine the capabilities of the organization as a whole. Previously, there was no centralized method or system for managing skills which are complex and wide-ranging. There was also no effective way for individuals to review skills they possess and to discover other skills which they can cross-train and leverage—either to enhance their existing roles and responsibilities, or possibly change skills and get involved with another area and thereby grow their career. The limited visualizations of skill sets offered to the individuals were static and non-interactive, which is not ideal.
When organizations grow and begin hiring new technical employees, this tremendous influx of new resources and talent makes the overall skill set of the organization increasingly difficult to comprehend, and the challenge gets harder over time. Further, to both retain and attract talent, such companies want to ensure they can provide a clear path for employees to manage their careers and talent growth effectively. They are also challenged to efficiently allocate technical resources, and to visualize the technical areas in which their current employees are strong and the areas in which they need further training (or where new employees need to be recruited) to help the company compete in the marketplace.
This patent represents a subset of our work on cohesive systems, methods, and devices for presenting and managing interrelated sets of skills for a person. We used a map interface to represent a set of interrelated skills to a user, and which allows the user an opportunity to strategize regarding how best the related and advanced skills may be acquired to advance on a career path.
The convergence of tech trends – in the past Mobility, Big Data, and Cloud (and today #DataScience, #ModernEngineering, #AI, #ML, and #Cloud) – enables the creation of modern skills management systems. The solution at the heart of the patent helps address this, and we deem it to have wide applicability across industry domains, sectors, and vertical segments.
Between filing the patent and the award today, we have adopted elements of this at Avanade and rolled it out globally to our workforce across 20 countries, allowing people to manage complex skills, advance their careers, and establish a 3D career path.
What is MVP?
#MVP you ask? #EnoughSaid

Update on Tesla .ssq files
Some time back, I noticed the car downloaded a large file (5.1 GB) which was a .ssq file. I hadn’t heard of a .ssq file and was curious what it was.
I researched a little, and as it turns out, a .ssq file is a compressed file system, often used in embedded Linux systems where storage size might be an area of concern. This file system is called SquashFS and is usually used in a read-only mode.
SquashFS is interesting, as it lets one mount the file system directly; it is distributed as a kernel source patch, which makes it easy to daisy-chain and use with other regular Linux tools.
The SquashFS tools are used to create and extract a SquashFS file system. As shown below, I can unpack the downloaded file using unsquashfs.

I think it is known that Tesla uses Valhalla for their maps, and this file is the updated maps data. Valhalla is an open-source routing engine that uses OpenStreetMap data. Valhalla also incorporates the traditional travelling salesman problem, which is an NP-hard problem.
When extracted and mounted, we see the following directory structure; each of these folders (and the files therein) are in fact the tiles that make up the maps (next time in the car, when you zoom in or out or search for a non-cached location, watch carefully how it loads and you can just about make out the tiles – it is quick and easy to miss). And it is these tiles that are used for routing as part of the navigation.

Tile-based routing is supposed to be beneficial – it uses less memory (the graph can be decomposed much more easily, with a smaller subset loaded in memory), is cache-able, is easier to manage (update-able), etc. We can get a glimpse of how routing and calculation happen on a tile basis below.

When extracted, we see there are three levels of hierarchy (0, 1, and 2). In the file system these are shown as directories, but there is a method to the madness.
- Level 0 – these contain edges pertaining to roads that are considered highway / freeway / motorway roads. These are stored as 4-degree tiles.
- Level 1 – contains roads that are at an arterial level, saved in 1-degree tiles.
- Level 2 – these are local roads, saved as 0.25-degree tiles (a quick tile-count calculation follows this list).
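To get a sense of scale, here is a quick back-of-the-envelope calculation, using just the tile sizes above, of how many tiles each level implies for the whole world:

# The world is 360 x 180 degrees; tiles per level = columns x rows.
for level, size in [(0, 4.0), (1, 1.0), (2, 0.25)]:
    cols, rows = int(360 / size), int(180 / size)
    print("Level {}: {} x {} = {} tiles".format(level, cols, rows, cols * rows))

# Level 0: 90 x 45    = 4050 tiles
# Level 1: 360 x 180  = 64800 tiles
# Level 2: 1440 x 720 = 1036800 tiles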
For example, the world at Level 0 would look like what we are seeing in the image below. And Pennsylvania can be seen below that; Level 0 colored in light blue, Level 1 in light green, and finally Level 2 in light red (which might not be obvious with the translucency).


So, to use this, one can use a few helper functions to get the exact tile to load, and vice-versa. For example, using the GPS coordinate of 41.413203, -73.623787 (which is just outside of Brewster, NY), computing the Level 2 tile (via the get_tile_id function below) gives us the path /2/000/756/425.gph, which tells us exactly which tile to load.
Below are helper functions (in Python) that obtain levels, tile ids, tile lists, lat/long coordinates, etc. from an intersecting bounding box.
valhalla_tiles = [{'level': 2, 'size': 0.25}, {'level': 1, 'size': 1.0}, {'level': 0, 'size': 4.0}]
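# A Valhalla graph id packs three fields into a single integer (per the
# constants below): 3 bits of hierarchy level, 22 bits of tile index,
# and 21 bits for the id of an element within the tile.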
LEVEL_BITS = 3
TILE_INDEX_BITS = 22
ID_INDEX_BITS = 21
LEVEL_MASK = (2**LEVEL_BITS) - 1
TILE_INDEX_MASK = (2**TILE_INDEX_BITS) - 1
ID_INDEX_MASK = (2**ID_INDEX_BITS) - 1
INVALID_ID = (ID_INDEX_MASK << (TILE_INDEX_BITS + LEVEL_BITS)) | (TILE_INDEX_MASK << LEVEL_BITS) | LEVEL_MASK
def get_tile_level(id):
    return id & LEVEL_MASK

def get_tile_index(id):
    return (id >> LEVEL_BITS) & TILE_INDEX_MASK

def get_index(id):
    return (id >> (LEVEL_BITS + TILE_INDEX_BITS)) & ID_INDEX_MASK

def tiles_for_bounding_box(left, bottom, right, top):
    # if this is crossing the anti meridian split it up and combine
    if left > right:
        east = tiles_for_bounding_box(left, bottom, 180.0, top)
        west = tiles_for_bounding_box(-180.0, bottom, right, top)
        return east + west
    # move these so we can compute percentages
    left += 180
    right += 180
    bottom += 90
    top += 90
    tiles = []
    # for each size of tile
    for tile_set in valhalla_tiles:
        # for each column
        for x in range(int(left/tile_set['size']), int(right/tile_set['size']) + 1):
            # for each row
            for y in range(int(bottom/tile_set['size']), int(top/tile_set['size']) + 1):
                # give back the level and the tile index
                tiles.append((tile_set['level'], int(y * (360.0/tile_set['size']) + x)))
    return tiles

def get_tile_id(tile_level, lat, lon):
    # a list comprehension instead of filter()[0], so this also works on Python 3
    level = [x for x in valhalla_tiles if x['level'] == tile_level][0]
    width = int(360 / level['size'])
    return int((lat + 90) / level['size']) * width + int((lon + 180) / level['size'])

def get_ll(id):
    tile_level = get_tile_level(id)
    tile_index = get_tile_index(id)
    level = [x for x in valhalla_tiles if x['level'] == tile_level][0]
    width = int(360 / level['size'])
    height = int(180 / level['size'])
    return int(tile_index / width) * level['size'] - 90, (tile_index % width) * level['size'] - 180
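As a quick sanity check of the helpers above, here is a small usage sketch. Note that get_tile_path is my own hypothetical helper for formatting the /level/xxx/xxx/xxx.gph directory layout described earlier; it is not part of the original set:

def get_tile_path(tile_level, tile_index):
    # Hypothetical helper: zero-pad the tile index to 9 digits and split
    # into groups of three, e.g. level 2, index 756425 -> /2/000/756/425.gph
    s = str(tile_index).zfill(9)
    return "/{}/{}/{}/{}.gph".format(tile_level, s[0:3], s[3:6], s[6:9])

# The Brewster, NY coordinate from the example above:
tile_id = get_tile_id(2, 41.413203, -73.623787)
print(tile_id)                    # 756425
print(get_tile_path(2, tile_id))  # /2/000/756/425.gph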
Tesla has actually open-sourced their implementation of Valhalla, which is based on C++. This still seems like an active project, but parts of the code haven’t been updated for a while.
Whilst I haven’t tried to set this up myself, it seems quite simple. Below are the instructions to get this going on Ubuntu or Debian (I think Mac is also supported, but needs a slightly different dependency set).
#below are the dependencies needed
sudo add-apt-repository -y ppa:valhalla-core/valhalla
sudo apt-get update
sudo apt-get install -y autoconf automake make libtool pkg-config g++ gcc jq lcov protobuf-compiler vim-common libboost-all-dev libboost-all-dev libcurl4-openssl-dev libprime-server0.6.3-dev libprotobuf-dev prime-server0.6.3-bin
#if you plan to compile with data building support, see below for more info
sudo apt-get install -y libgeos-dev libgeos++-dev liblua5.2-dev libspatialite-dev libsqlite3-dev lua5.2
if [[ $(grep -cF xenial /etc/lsb-release) > 0 ]]; then sudo apt-get install -y libsqlite3-mod-spatialite; fi
#if you plan to compile with python bindings, see below for more info
sudo apt-get install -y python-all-dev
#install with the following
git submodule update --init --recursive
./autogen.sh
./configure
make test -j$(nproc)
sudo make install
There you have it – we know now what the .ssq files are and how they are used. Just need more time to get it going and play with it – perhaps another project for another time. 🙂
Tesla and Spotify
Something seems to be up, with the car tickling an endpoint for connectivity perhaps? It’s only 663 bytes up and 222 bytes down. This is still on v8.1 (36.2).

Tesla v9 API endpoints
In case you haven’t been following the news, Tesla is in the process of releasing the new firmware beta. I think many folks online are super interested in new autopilot upgrades.
I reverse-engineered the associated app, and there are certainly a few new endpoints exposed, as outlined below. I now need time to figure out more details on these and what they entail, and to see what changes in the existing code and JSON (data structures).
It is interesting to noodle on this and see the associated calls. Below is first a minimal sketch of how one of these endpoints might be invoked, followed by the full list of endpoints as of today.
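As a hedged illustration (this is my own sketch, not code from the app), here is how one of these endpoints might be called with Python’s requests library. The base URL and the response envelope are assumptions on my part; the endpoint paths and AUTH flags come straight from the list below.

import requests

# Assumption: the owner API base URL; not part of the extracted list below.
BASE = "https://owner-api.teslamotors.com/"
TOKEN = "oauth token obtained via the AUTHENTICATE (oauth/token) endpoint"

headers = {"Authorization": "Bearer " + TOKEN}

# VEHICLE_LIST: GET api/1/vehicles (AUTH: true)
vehicles = requests.get(BASE + "api/1/vehicles", headers=headers).json()
vehicle_id = vehicles["response"][0]["id"]  # response envelope is an assumption

# WAKE_UP: POST api/1/vehicles/{vehicle_id}/wake_up (AUTH: true)
r = requests.post(BASE + "api/1/vehicles/{}/wake_up".format(vehicle_id), headers=headers)
print(r.status_code)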
{
"AUTHENTICATE": {
"TYPE": "POST",
"URI": "oauth/token",
"AUTH": false
},
"REVOKE_AUTH_TOKEN": {
"TYPE": "POST",
"URI": "oauth/revoke",
"AUTH": true
},
"PRODUCT_LIST": {
"TYPE": "GET",
"URI": "api/1/products",
"AUTH": true
},
"VEHICLE_LIST": {
"TYPE": "GET",
"URI": "api/1/vehicles",
"AUTH": true
},
"VEHICLE_SUMMARY": {
"TYPE": "GET",
"URI": "api/1/vehicles/{vehicle_id}",
"AUTH": true
},
"VEHICLE_DATA": {
"TYPE": "GET",
"URI": "api/1/vehicles/{vehicle_id}/data",
"AUTH": true
},
"WAKE_UP": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/wake_up",
"AUTH": true
},
"UNLOCK": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/door_unlock",
"AUTH": true
},
"LOCK": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/door_lock",
"AUTH": true
},
"HONK_HORN": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/honk_horn",
"AUTH": true
},
"FLASH_LIGHTS": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/flash_lights",
"AUTH": true
},
"CLIMATE_ON": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/auto_conditioning_start",
"AUTH": true
},
"CLIMATE_OFF": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/auto_conditioning_stop",
"AUTH": true
},
"CHANGE_CLIMATE_TEMPERATURE_SETTING": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/set_temps",
"AUTH": true
},
"CHANGE_CHARGE_LIMIT": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/set_charge_limit",
"AUTH": true
},
"CHANGE_SUNROOF_STATE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/sun_roof_control",
"AUTH": true
},
"ACTUATE_TRUNK": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/actuate_trunk",
"AUTH": true
},
"REMOTE_START": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/remote_start_drive",
"AUTH": true
},
"CHARGE_PORT_DOOR_OPEN": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/charge_port_door_open",
"AUTH": true
},
"CHARGE_PORT_DOOR_CLOSE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/charge_port_door_close",
"AUTH": true
},
"START_CHARGE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/charge_start",
"AUTH": true
},
"STOP_CHARGE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/charge_stop",
"AUTH": true
},
"MEDIA_TOGGLE_PLAYBACK": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/media_toggle_playback",
"AUTH": true
},
"MEDIA_NEXT_TRACK": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/media_next_track",
"AUTH": true
},
"MEDIA_PREVIOUS_TRACK": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/media_prev_track",
"AUTH": true
},
"MEDIA_NEXT_FAVORITE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/media_next_fav",
"AUTH": true
},
"MEDIA_PREVIOUS_FAVORITE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/media_prev_fav",
"AUTH": true
},
"MEDIA_VOLUME_UP": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/media_volume_up",
"AUTH": true
},
"MEDIA_VOLUME_DOWN": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/media_volume_down",
"AUTH": true
},
"SEND_LOG": {
"TYPE": "POST",
"URI": "api/1/logs",
"AUTH": true
},
"RETRIEVE_NOTIFICATION_PREFERENCES": {
"TYPE": "GET",
"URI": "api/1/notification_preferences",
"AUTH": true
},
"SEND_NOTIFICATION_PREFERENCES": {
"TYPE": "POST",
"URI": "api/1/notification_preferences",
"AUTH": true
},
"RETRIEVE_NOTIFICATION_SUBSCRIPTION_PREFERENCES": {
"TYPE": "GET",
"URI": "api/1/vehicle_subscriptions",
"AUTH": true
},
"SEND_NOTIFICATION_SUBSCRIPTION_PREFERENCES": {
"TYPE": "POST",
"URI": "api/1/vehicle_subscriptions",
"AUTH": true
},
"DEACTIVATE_DEVICE_TOKEN": {
"TYPE": "POST",
"URI": "api/1/device/{device_token}/deactivate",
"AUTH": true
},
"CALENDAR_SYNC": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/upcoming_calendar_entries",
"AUTH": true
},
"SET_VALET_MODE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/set_valet_mode",
"AUTH": true
},
"RESET_VALET_PIN": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/reset_valet_pin",
"AUTH": true
},
"SPEED_LIMIT_ACTIVATE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/speed_limit_activate",
"AUTH": true
},
"SPEED_LIMIT_DEACTIVATE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/speed_limit_deactivate",
"AUTH": true
},
"SPEED_LIMIT_SET_LIMIT": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/speed_limit_set_limit",
"AUTH": true
},
"SPEED_LIMIT_CLEAR_PIN": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/speed_limit_clear_pin",
"AUTH": true
},
"SCHEDULE_SOFTWARE_UPDATE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/schedule_software_update",
"AUTH": true
},
"CANCEL_SOFTWARE_UPDATE": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/cancel_software_update",
"AUTH": true
},
"POWERWALL_ORDER_SESSION_DATA": {
"TYPE": "GET",
"URI": "api/1/users/powerwall_order_entry_data",
"AUTH": true
},
"POWERWALL_ORDER_PAGE": {
"TYPE": "GET",
"URI": "powerwall_order_page",
"AUTH": true,
"CONTENT": "HTML"
},
"ONBOARDING_EXPERIENCE": {
"TYPE": "GET",
"URI": "api/1/users/onboarding_data",
"AUTH": true
},
"ONBOARDING_EXPERIENCE_PAGE": {
"TYPE": "GET",
"URI": "onboarding_page",
"AUTH": true,
"CONTENT": "HTML"
},
"REFERRAL_DATA": {
"TYPE": "GET",
"URI": "api/1/users/referral_data",
"AUTH": true
},
"REFERRAL_PAGE": {
"TYPE": "GET",
"URI": "referral_page",
"AUTH": true,
"CONTENT": "HTML"
},
"MESSAGE_CENTER_MESSAGE_LIST": {
"TYPE": "GET",
"URI": "api/1/messages",
"AUTH": true
},
"MESSAGE_CENTER_MESSAGE": {
"TYPE": "GET",
"URI": "api/1/messages/{message_id}",
"AUTH": true
},
"MESSAGE_CENTER_COUNTS": {
"TYPE": "GET",
"URI": "api/1/messages/count",
"AUTH": true
},
"MESSAGE_CENTER_MESSAGE_ACTION_UPDATE": {
"TYPE": "POST",
"URI": "api/1/messages/{message_id}/actions",
"AUTH": true
},
"MESSAGE_CENTER_CTA_PAGE": {
"TYPE": "GET",
"URI": "messages_cta_page",
"AUTH": true,
"CONTENT": "HTML"
},
"AUTH_COMMAND_TOKEN": {
"TYPE": "POST",
"URI": "api/1/users/command_token",
"AUTH": true
},
"SEND_DEVICE_KEY": {
"TYPE": "POST",
"URI": "api/1/users/keys",
"AUTH": true
},
"DIAGNOSTICS_ENTITLEMENTS": {
"TYPE": "GET",
"URI": "api/1/diagnostics",
"AUTH": true
},
"SEND_DIAGNOSTICS": {
"TYPE": "POST",
"URI": "api/1/diagnostics",
"AUTH": true
},
"BATTERY_SUMMARY": {
"TYPE": "GET",
"URI": "api/1/powerwalls/{battery_id}/status",
"AUTH": true
},
"BATTERY_DATA": {
"TYPE": "GET",
"URI": "api/1/powerwalls/{battery_id}",
"AUTH": true
},
"BATTERY_POWER_TIMESERIES_DATA": {
"TYPE": "GET",
"URI": "api/1/powerwalls/{battery_id}/powerhistory",
"AUTH": true
},
"BATTERY_ENERGY_TIMESERIES_DATA": {
"TYPE": "GET",
"URI": "api/1/powerwalls/{battery_id}/energyhistory",
"AUTH": true
},
"BATTERY_BACKUP_RESERVE": {
"TYPE": "POST",
"URI": "api/1/powerwalls/{battery_id}/backup",
"AUTH": true
},
"BATTERY_SITE_NAME": {
"TYPE": "POST",
"URI": "api/1/powerwalls/{battery_id}/site_name",
"AUTH": true
},
"BATTERY_OPERATION_MODE": {
"TYPE": "POST",
"URI": "api/1/powerwalls/{battery_id}/operation",
"AUTH": true
},
"SITE_SUMMARY": {
"TYPE": "GET",
"URI": "api/1/energy_sites/{site_id}/status",
"AUTH": true
},
"SITE_DATA": {
"TYPE": "GET",
"URI": "api/1/energy_sites/{site_id}/live_status",
"AUTH": true
},
"SITE_CONFIG": {
"TYPE": "GET",
"URI": "api/1/energy_sites/{site_id}/site_info",
"AUTH": true
},
"HISTORY_DATA": {
"TYPE": "GET",
"URI": "api/1/energy_sites/{site_id}/history",
"AUTH": true
},
"BACKUP_RESERVE": {
"TYPE": "POST",
"URI": "api/1/energy_sites/{site_id}/backup",
"AUTH": true
},
"SITE_NAME": {
"TYPE": "POST",
"URI": "api/1/energy_sites/{site_id}/site_name",
"AUTH": true
},
"OPERATION_MODE": {
"TYPE": "POST",
"URI": "api/1/energy_sites/{site_id}/operation",
"AUTH": true
},
"TIME_OF_USE_SETTINGS": {
"TYPE": "POST",
"URI": "api/1/energy_sites/{site_id}/time_of_use_settings",
"AUTH": true
},
"STORM_MODE_SETTINGS": {
"TYPE": "POST",
"URI": "api/1/energy_sites/{site_id}/storm_mode",
"AUTH": true
},
"SEND_NOTIFICATION_CONFIRMATION": {
"TYPE": "POST",
"URI": "api/1/notification_confirmations",
"AUTH": true
},
"NAVIGATION_REQUEST": {
"TYPE": "POST",
"URI": "api/1/vehicles/{vehicle_id}/command/navigation_request",
"AUTH": true
}
}
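If you want to poke at these yourself, here is a minimal Python sketch (using requests) of calling two of the endpoints in the listing above. The host name, the token, and the response-wrapper shape are my assumptions based on how this unofficial Owner API is commonly documented, not something the listing itself specifies – substitute whatever your own oauth/token call returns.

import requests

# Minimal sketch of exercising the VEHICLE_LIST and WAKE_UP endpoints above.
# Assumptions (not from the listing): the Owner API host below, a bearer token
# already obtained via the AUTHENTICATE (oauth/token) endpoint, and the usual
# {"response": [...]} wrapper shape in the JSON that comes back.
BASE = "https://owner-api.teslamotors.com/"
TOKEN = "<access-token-from-oauth/token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# VEHICLE_LIST: GET api/1/vehicles (AUTH: true)
vehicles = requests.get(BASE + "api/1/vehicles", headers=HEADERS).json()
vehicle_id = vehicles["response"][0]["id"]

# WAKE_UP: POST api/1/vehicles/{vehicle_id}/wake_up (AUTH: true)
requests.post(f"{BASE}api/1/vehicles/{vehicle_id}/wake_up", headers=HEADERS)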
Atom
Never trust an atom, they make up everything. 🤓
#GeekyJokes
#ML concepts – Regularization, a primer
Regularization is a fundamental concept in Machine Learning (#ML) and is used when training most models. It is the key technique that helps guard against overfitting.
Overfitting is when an algorithm or model ‘fits’ the training data too well – it seems too good to be true. Essentially, the model being trained learns the noise in the data instead of ignoring it. If we allow overfitting, the network only uses (or is more heavily influenced by) a subset of the input (the larger peaks) and doesn’t factor in all of the input.
The worry is that outside of the training data, the model might not work as well on ‘real world’ data. For example, the model represented by the green line in the image below (credit: Wikipedia) follows the sample data too closely and seems too good; the model represented by the black line, on the other hand, generalizes better.

Regularization helps with overfitting by (artificially) penalizing the weights in the neural network. These weights are represented as peaks, and regularization reduces those peaks. This ensures that the higher weights (peaks) don’t overshadow the rest of the data and cause the model to overfit. This diffusion of the weight vector is sometimes also called weight decay.
Although there are a few regularization techniques for preventing overfitting (outlined below), these days in Deep Learning the L1 and L2 techniques are favored over the others (a short sketch contrasting the two follows this list).
- Cross validation: This is a method for finding the best hyperparameters for a model, e.g. in gradient descent, figuring out the stopping criteria. There are various ways to do this, such as the holdout method, k-fold cross-validation, leave-one-out cross-validation, etc.
- Step-wise regression: This is a serial, step-by-step regression where one removes the weakest variable. It runs a regression a number of times, each time removing the weakest correlated variable; at the end you are left with the variables that explain the distribution best. The only requirements are that the data is normally distributed and that there is no correlation between the independent variables.
- L1 regularization: Here we modify the cost function by adding the sum of the absolute values of the weights as the penalty. With L1 regularization the weights shrink by a constant amount towards zero. L1 regularization is also called Lasso regression.
- L2 regularization: With L2 regularization, on the other hand, each weight shrinks by an amount proportional to the weight itself (as outlined in the image below); to get this proportional shrinking we penalize the sum of the squares of the weights instead of the absolute values. This shrinking makes the weights smaller and is also sometimes called weight decay. At face value it might seem the weights eventually reach zero, but that is not true; typically other terms in the update cause the weights to increase. L2 regularization is also called Ridge regression.
- Max-norm: This enforces an upper bound on the magnitude of the weight vector. One place this helps is that the network cannot ‘explode’ when the learning rate gets very high, as the weights are bounded. This is also called projected gradient descent.
- Dropout: This is very simple and efficient, and is used in conjunction with one of the previous techniques. Essentially it assigns each neuron a probability of staying active; otherwise the neuron ‘drops out’ and its output is set to zero. Dropout doesn’t modify the cost function; it modifies the network itself, as shown in the image below.
- Increase training data: While one can theoretically expand the training set artificially, in reality this won’t work in most cases, especially in more complex networks; it is typically not cost-effective to gather a truly representative dataset.



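To make the L1 versus L2 behavior concrete, here is a small NumPy sketch (mine, not from any library) of a gradient-descent update with each penalty bolted onto a plain least-squares loss; the data, learning rate, and lambda are made-up illustrative values.

import numpy as np

# Contrast L1 and L2 penalties in a gradient-descent update on a toy
# least-squares problem. Three of the five true weights are zero, so we
# can watch regularization push the noise weights towards zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # 100 samples, 5 features
true_w = np.array([2.0, 0.0, 0.0, -1.5, 0.0])  # made-up ground truth
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(5)
lr, lam = 0.1, 0.01                            # illustrative values

for _ in range(500):
    grad = (2 / len(X)) * X.T @ (X @ w - y)    # gradient of the MSE term
    # L1: shrinks every weight by a *constant* amount, lam * sign(w):
    # w -= lr * (grad + lam * np.sign(w))
    # L2: shrinks each weight *proportionally* to its size, 2 * lam * w:
    w -= lr * (grad + 2 * lam * w)

print(np.round(w, 3))                          # noise weights stay near zero

Running it, the weights on the three noise features stay near zero; switching to the commented-out L1 line instead tends to drive them to exactly zero, which is the ‘concentrate the weight in fewer but more important connections’ effect mentioned below.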
Between L1 and L2 regularization, many say that L2 is preferred, but I think it depends on the problem statement. If a weight in the network has a large magnitude, L2 regularization shrinks it more than L1 and works better. Conversely, if the weight is small, L1 shrinks it more than L2 – and is better, as it tends to concentrate the weight in fewer but more important connections in the network.
In closing, the key aspect to appreciate is that the small weights (peaks) in a regularized network mean that as our input changes randomly (i.e. noise), the change doesn’t have a huge impact on the network and its output. This makes it difficult for the network to learn the noise and respond to it. Conversely, in an unregularized network with higher weights (peaks), small random changes to those weights can have a larger impact on the behavior of the network and the information it carries.
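As a practical footnote, in Keras (which this whole stack uses for training) these techniques plug in as one-liners. A hedged sketch follows – the layer sizes, input width, and penalty strengths are illustrative assumptions, not recommendations:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Illustrative only: a tiny regression model with an L2 (Ridge-style)
# penalty on one layer, an L1 (Lasso-style) penalty on another, and a
# Dropout layer in between. All sizes and strengths are made up.
model = keras.Sequential([
    layers.Input(shape=(32,)),           # assumed input width
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),                 # zero ~half the activations while training
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(1e-5)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")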