
The End of Online Anonymity

Since 2015, I’ve been writing about the impact that machine learning will have on our society. One of the most concerning possibilities, in my mind, was always the potential abuse of these technologies by malicious actors to manipulate or scam people, either through subtle means or by impersonating those they trust.

Today, this concern is very much mainstream: “fake news” has become a buzzword and a kind of modern-day boogeyman. Still, I think most people aren’t overly worried. We know that there already are malicious actors creating sketchy content and putting it out there, but most of it seems obviously fake if you examine it closely. We all assume that we will always be smart enough to tell real from fake, and we carry on.

Media manipulation is nothing new. Attempts to control public discourse and influence the masses predate the internet, TV, newspapers and the printing press. What’s about to change is that now, with machine learning, it will become possible to turn electricity into millions of voices relentlessly spreading your gospel to every corner of the internet. At this point in time, it seems most of the fake content out there is not generated using machine learning, it’s created by human beings using puppet accounts. For the most part, someone still has to turn the crank. That limits how much content can be created and how many sources it can come from.

Personally, I’m not just worried about manipulative articles being passed off as news. I’m also worried about the impact that networks of malicious bots will have on online communities. We’re still fairly far from the point where we can automatically generate news articles that appear convincing upon close inspection, but what about online comment threads? How difficult is it to build a bot that can write convincing one- or two-sentence comments?

Yesterday, I stumbled upon a link to a subreddit populated by bots based on OpenAI’s GPT-2 text generation model. The result is certainly funny, but it also leaves me feeling uncomfortable. Yes, much of the content is obviously fake, but many of the comments are actually believable. If you feel unimpressed, keep in mind that this is an individual’s side project that repurposed an existing neural network. As it is, the GPT-2 model simply generates text by completing the prompt it is given. It’s an impressive and amusing tech demo, but not something you can easily control. In order to weaponize GPT-2, a malicious actor would need to add some kind of guidance system: a way to condition the model’s output so that it spreads a specific message.

The solution to the fake content problem may seem obvious: we can fight fire with fire, and build machine learning tools to detect machine-generated content. Tools like this are already in the works. Grover boasts 92% accuracy in detecting fake content. The sad reality, however, is that this is an arms race, and it’s not clear at all that this is something we can win. Facebook already employs thousands of human agents for the purpose of detecting and flagging malicious actors, and these people do flag a large volume of content, but they are struggling to keep up. As technology improves, fake content will become harder and harder to tell apart from real content. Manual content verification won’t be able to keep up with the volume, and automated filtering systems will fail.

In my opinion, there is only one effective way to stop fake content, and that is to verify that everyone who posts content is in fact human. You could ask people to upload pictures of themselves, but we’re already at the point where we can produce realistic images of imaginary people using GANs. Any counter-measure of this form will inevitably be defeated. Ultimately, one possibility is that online platforms will begin requiring a verified government ID in order to register. We could even end up living in a world where a kind of “e-passport”, a cryptographically signed government ID, is attached to your every internet connection and tracked everywhere you go online.

The rise of bots could render many online communities simply uninhabitable. Large websites such as Facebook and reddit may have some hope of policing content, but smaller independent players likely won’t have the resources. We are moving towards a model where the internet is dominated by a few centralized content providers and their walled gardens, and generated content may unfortunately make it even harder for grassroots online communities to survive and grow.

I wish I had a more optimistic message for this post. I wish I could do more than point at a potential problem. Maybe there is a way to build a new web, a new kind of social media using a hash graph to implement a decentralized web of trust, something that can allow content verification without forcing everyone to sacrifice their right to remain anonymous online. I certainly think it’s a problem that’s worth thinking about, because unless we can come up with a technical solution, a regulatory solution may be imposed onto us, and it will inevitably favor the big players at the expense of the small.


MiniWorld: A VizDoom Alternative for OpenAI Gym

VizDoom and DMLab are two 3D simulated environments commonly used in the reinforcement learning community. VizDoom is based on the original Doom game, and DMLab is based on the Quake 3 game engine. Recently, DeepMind has produced impressive results using DMLab, showing that neural networks trained end-to-end can learn to navigate 3D environments using visual inputs alone, and even execute simple language commands.

These simulated environments are popular and useful, but they can also be difficult to work with. VizDoom can be tricky to get running on your system; there are unfortunately many dependencies, and if any of them fail to build and install, you’re going to have a bad time. Furthermore, both VizDoom and DMLab are fairly impractical to customize. The Doom and Quake game engines are written in (poorly commented) C code. Also, because VizDoom is based on a game from 1993, it stores its assets in a fairly archaic binary format (WAD files). DMLab is nice enough to provide a scripting layer which allows you to create custom environments without touching C code or using a map editor. However, this scripting layer is in Lua and is poorly documented.

The 3D environments used in reinforcement learning experiments are typically fairly simple (e.g. mazes, rooms connected by hallways, etc.), and it seems obvious to me that the full Quake 3 game engine is overkill for what DeepMind has built with DMLab. As such, I set out to build MiniWorld, a minimalistic 3D engine for the purpose of building OpenAI Gym environments. MiniWorld is written entirely in Python and uses Pyglet (OpenGL) to produce 3D graphics. I wrote everything in Python because this language has become the “lingua franca” of the deep learning community, and I wanted MiniWorld to be easily modified and extended by students. Python is not known to be a fast language, but since the bulk of the work (3D graphics) is GPU-accelerated, MiniWorld can run at over 2000 frames per second on a desktop computer.

MiniWorld has trivial collision detection which prevents the agent from going through walls. It’s not physically accurate, and it is not appropriate for tasks like robotic arm control. However, I believe it can be useful for navigation tasks where you want to procedurally generate a variety of simple environments. It can render indoor (and fake outdoor) environments made of rooms connected by hallways. The package is designed to make it easy to generate these environments procedurally using code, that is, you never have to produce map files. Because MiniWorld has been intentionally kept minimalistic, it has very few dependencies: Pyglet, NumPy and OpenAI Gym. This makes it easy to install and get working almost anywhere. This is an asset in the world of machine learning, where you may have to get experiments running on multiple compute clusters.
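To give a sense of how an environment like this is used, here is a minimal sketch of the standard OpenAI Gym interaction loop with a random policy. The environment ID below is an assumption on my part; check the repository for the actual names registered by the package.

```python
import gym
import gym_miniworld  # importing the package registers its environments with Gym

# Hypothetical environment ID; see the MiniWorld repository for the real ones.
env = gym.make('MiniWorld-Hallway-v0')

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random policy, for illustration only
    obs, reward, done, info = env.step(action)  # standard Gym step
    env.render()                                # optional on-screen rendering
env.close()
```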

There are many use cases which MiniWorld cannot satisfy, but I believe that despite its simplicity, it can be a useful research tool. I’ve integrated domain randomization features which make it possible to do sim-to-real transfer experiments, similar to what I had previously implemented in the gym-duckietown environment. Domain randomization means randomly varying parameters of the environment to prevent a neural network from overfitting to the simulation, hopefully forcing the neural network to generalize to the real world. The videos below show a small robot trained to follow a red box in a simulated environment, and then tested in the real world.

If you are interested in using MiniWorld for your research, you can find the source code in this repository. Bug reports, contributions and feature requests are welcome. If you feel that some specific feature would be particularly useful for your research, feel free to open an issue on GitHub. You can influence the course of this project.

A Small Dose of Optimism

Before I went to see Blade Runner 2049, I sat down with a friend to re-watch the original Blade Runner from 1982. At the beginning of the movie, there is a grim portrayal of the future. A title screen reads “Los Angeles: November 2019”. We are then shown a dark city obscured by smog, with tall buildings and even taller chimneys that have columns of oil fire and smoke projecting upwards.


It’s 2018, and there are a lot of things to be upset and worried about. The sum of all human knowledge has never been greater than it is now, but unfortunately, our world seems to be ruled by plutocrats, and divided by identity politics. The stock market is negative for the year, and the sky might be just about to fall, or so the news media would have you believe. Thankfully, the world is not yet as grim as what Blade Runner projected, and with any luck, it won’t be so grim in November 2019 either.

I’m very optimistic about green energy. I have been a Tesla shareholder since 2016, and it has been amazing to watch that company’s progress. Despite all the negative press, they have produced and sold over 200,000 electric cars in 2018, and produced a profit that beat estimates in the last quarter. The naysayers will tell you that Tesla produces luxury cars which are out of reach of the masses. That’s currently still true, but Tesla has been sticking to its plan, which is to produce cheaper and cheaper cars as they perfect the technology and achieve economies of scale. Their cheapest car, the mid-range Model 3, is currently priced at $45,000 USD, which is a lot cheaper than the first car they ever sold, the Roadster, which sold for $122,000 USD (adjusted for inflation).

In my opinion, Tesla is succeeding in its stated mission: “to accelerate the world’s transition to sustainable energy”. China and Europe are ahead of the curve in electric vehicle production and adoption, but Tesla, GM and Nissan are already selling electric cars in the US, and almost all major automakers have announced plans to begin producing them as well: even Ford has announced a Mustang EV. Tesla is forcing every automaker to see the obvious: if you can make them cheap and give them enough range, there is massive pent up demand for electric cars. The electric car revolution is happening, for real.

The common argument against electric cars is that the electricity they use is often produced from non-renewable sources, negating their environmental benefit. I’ll begin by stating that this isn’t true everywhere. Quebec, the Canadian province where I’m from, produced 95.2% of its power from hydro, and 3.6% from wind in 2016. However, even if all the power in the world were generated by burning coal, the fact is that industrial power plants are much more efficient at converting fossil fuels into energy than car engines are. They can be made more efficient because they don’t have the same size and weight constraints as car engines do.

Many will argue that solar and wind power can’t possibly replace fossil fuels because the sun doesn’t always shine and the wind doesn’t always blow. This problem is being addressed as well. Tesla is building large batteries that can be installed in your home or at large power plants. Multiple startups are also working on utility-scale energy storage solutions. The cost of renewable energy technologies is still a little steep, but solar panels, windmills and lithium-ion batteries have been steadily becoming more affordable every year.

Currently, it’s possible to buy and install a 5 kW solar setup that includes a home battery for about $15,000 USD. This is enough for one person to live completely off the grid. That price may seem high, but if you amortize it over a 20-year mortgage, and you factor in energy savings, you realize that it’s already within reach of many people. Assuming that the cost of solar cells and batteries keeps falling, we will soon reach a point where renewable energy is available 24 hours a day, and cheaper than fossil fuels. When this happens, it will be economically unjustifiable to use fossil fuels for energy, and the transition will not only be quick, but also inevitable.
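To make the amortization argument concrete, here is a rough back-of-the-envelope calculation. The interest rate is an assumed figure of mine, purely for illustration:

```python
# Back-of-the-envelope amortization of a solar + battery setup.
# The 4% interest rate is an assumption, not a quoted figure.
principal = 15_000        # USD, 5 kW solar setup with home battery
annual_rate = 0.04        # assumed mortgage interest rate
years = 20

r = annual_rate / 12                           # monthly interest rate
n = years * 12                                 # number of monthly payments
payment = principal * r / (1 - (1 + r) ** -n)  # standard amortization formula

print(f"Monthly payment: ${payment:.2f}")      # roughly $91 per month
```

At roughly $91 a month, the cost is in the same ballpark as a typical household electricity bill, which is why I say it is already within reach of many people.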

Global warming is still a problem, particularly if you take into account the runaway greenhouse effect. However, the transition to renewables could happen faster than you think. It’s not unrealistic to think that in less than 5 years, electric cars will be about as cheap as gasoline cars, and in less than a decade, we may see some oil and gas corporations going bankrupt. It’s also very possible that the cost of solar cells and energy storage will keep going down even after they become price-competitive with non-renewable energy technologies. Within this century, we may enter an era where energy is cheap and plentiful.

That’s great, but what about all the other problems in the world? If you look beyond the negative headlines, there are many reasons to remain optimistic. According to multiple sources, the number of people living in conditions of extreme poverty is hitting an all-time low. It’s also likely that cheaper and cleaner energy will not just give us cleaner air, but also alleviate issues of scarcity in many parts of the world. Solar panels and batteries can produce and store energy anywhere in the world, and it so happens that many of the poorest countries get plenty of sunlight.

Building a Simple Self-Driving Car Simulator

As part of my new job, I’ve been working with Professor Liam Paull and his students on building a simulator for Duckietown. This is a university course being taught at ETH Zurich, the University of Montreal, TTIC/UChicago and other institutions, where students learn about self-driving cars and robotics by building their own small-scale model which can drive around in the miniature Duckietown universe, complete with turns, intersections, traffic signs, and other moving vehicles.

The course has, so far, been focused on traditional robotics methods using computer vision and PID controllers to drive the robots. However, students and professors are becoming increasingly interested in using deep learning (and reinforcement learning in particular). Since reinforcement learning needs a lot of data, it’s much more practical to do it in simulation than with physical robots. This has been one of the main motivations for building a simulator.

The simulator I’m building works as an OpenAI Gym environment, which makes it easy to use for reinforcement learning. It’s written in pure Python and uses OpenGL (pyglet) to produce graphics. I chose Python because this is the language most used by the deep learning community, and I wanted students to be able to modify the simulator easily. The simulator performs many forms of domain randomization: it randomly varies colors, the camera angle and field of view, etc. This feature is meant to help neural networks learn to deal with variability, so that trained networks will hopefully work not just in simulation, but in the real world as well.
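To illustrate what domain randomization means in practice, here is a simplified sketch of the idea. The parameter names and ranges are made up for illustration and are not the simulator’s actual code:

```python
import random

def sample_random_params():
    """Resample rendering parameters at the start of each episode (illustrative only)."""
    return {
        'light_color':  [random.uniform(0.7, 1.0) for _ in range(3)],  # RGB tint of the scene lighting
        'camera_fov':   random.uniform(50.0, 70.0),                    # field of view, in degrees
        'camera_angle': random.uniform(-5.0, 5.0),                     # camera pitch offset, in degrees
        'road_color':   [random.uniform(0.2, 0.4) for _ in range(3)],  # dark gray, with some variation
    }

# Called from reset(): every episode, the agent sees a slightly different world,
# which discourages the network from overfitting to one particular rendering.
params = sample_random_params()
```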

Because Duckietown is a toy universe that is essentially two-dimensional, I’ve designed a YAML map format which can be easily hand-edited to create custom new environments. It essentially describes a set of tiles, and where to place 3D models relative to those tiles. I’ve also made some simplifying assumptions with regard to physics (the agent essentially moves along a 2D plane).

While working on this simulator, I’m also experimenting with sim-to-real transfer, that is, getting policies trained in simulation to work with a real robot. This is a work in progress, but we are close to having this working reliably. The video below shows some promising results:

If you’re interested in playing with this, the source code is available on GitHub. Feedback and contributions are welcome. If you run into any issues, please report them, as I’d like to make the simulator as easy to use as possible.


(Spread)sheet Music: Making a Simple Music Sequencer using CSV Spreadsheets

On Friday night, I put together a little hack that I found quite amusing. It’s a music sequencer that uses CSV spreadsheets to loop and sequence beats. It has the useful feature that you can save the spreadsheet, and it will periodically reload it, so that you can edit and jam with your sequences live. Best of all, it’s around 100 lines of Python code, comments and all. I posted a video I recorded on twitter, and the reception was very positive, so I thought I would provide a short writeup, and make the source code available as a gist for anyone interested.

You might wonder why I wrote this program, besides the humorous and attention-grabbing aspects. Clearly, there are already a variety of sophisticated programs (DAWs, Digital Audio Workstations) such as Ableton Live out there which can do much more complex things. Part of the motivation is that I like to play with music programming, and by writing my own code to sequence music, I can write a program that will have exactly the workflow and features that I want.

One of the things that I wanted to play with, here, was the ability to create melodies using only the notes in a given scale. Ableton Live Lite, or at least the version I have here, for all its powerful features, doesn’t have a mode that will highlight a given scale (at least not without plug-ins). I think there is a lot of value in being able to write software with an interface that is fully customized for the things you want to play with, and in this case, writing a music sequencer is trivially easy, so why not?


The spreadsheet sequencer uses the mido package to send MIDI output to hardware synthesizers that I have at home, namely a Novation Bass Station II and an Arturia Drumbrute. These are directly connected to my Linux PC using USB. Note that you do not need to own hardware instruments to play with Python and MIDI. I play with hardware synths because I enjoy having physical knobs I can turn, but there are many free software programs that will connect via the MIDI protocol, both synthesizers and samplers.
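For the curious, here is a stripped-down sketch of the core loop. The CSV layout and constants are assumptions for the sake of illustration (the actual script is organized differently), but it shows how little code is needed to turn a spreadsheet into MIDI messages with mido:

```python
import csv
import time
import mido

STEP_SECONDS = 0.125       # assumed step length: 16th notes at 120 BPM
PATTERN_FILE = 'pattern.csv'

out = mido.open_output()   # opens the default MIDI output port

def load_pattern(path):
    """Each row is one step; non-empty cells hold MIDI note numbers (assumed layout)."""
    with open(path) as f:
        return [row for row in csv.reader(f)]

while True:
    pattern = load_pattern(PATTERN_FILE)  # reload every pass so live edits are picked up
    for row in pattern:
        notes = [int(cell) for cell in row if cell.strip()]
        for note in notes:
            out.send(mido.Message('note_on', note=note, velocity=100))
        time.sleep(STEP_SECONDS)
        for note in notes:
            out.send(mido.Message('note_off', note=note))
```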

It’s also possible to use MIDI for input. There are devices called MIDI controllers which can connect to your computer via USB. These can be keyboards, or boards with physical knobs you can turn to adjust parameters as you jam live. You can try searching for “MIDI controller” on eBay if that’s something that interests you. There are some used ones you can get for very cheap.

To conclude, I will note that although I came up with the idea on my own, I wasn’t the first to think of using a spreadsheet program to sequence music. I hope that this post has encouraged you to explore your musical creativity, or just to program something fun :)

Minimalism in Programming

I’m 32, and I’ve been programming actively for over 16 years at this point. I don’t have a lifetime of experience doing this, but over the years, I’ve come to develop a certain style, a kind of philosophy or methodology that I try to apply in everything I do. Crucially, I would say that I’m a minimalist. I like to build things that are only as complex as they need to be to accomplish their purpose. I like to distill ideas to their simplest form.

Much of what I will discuss in this post may seem like common sense to many of you. I’m probably not the first one to tell you about the principles of KISS and YAGNI. Unfortunately, I think that the art of building simple, robust software is something that is rarely taught in universities, poorly understood, and often disregarded. We live in a world full of bug-ridden, poorly written software, with a thousand useless bells and whistles. I don’t believe it has to be this way. In my view, many of the bugs we encounter could be avoided if more programmers followed some basic principles to minimize complexity.

Back when I was a teenager, in the early 2000s, one of the first programming projects I embarked on was an ambitious 3D game. I recruited several people to work on this project with me. We produced a lot of art, and we did implement a game engine and even a map editor. We had some nice screenshots to show. Unfortunately, no game ever came out of it. One of the main issues was a lack of focus on my part. I wanted to build something more awesome than Unreal, Quake 3 and Half-Life, and I thought I needed killer tech to do this, but I didn’t really have a specific game in mind. I had no specific goal, and so no concrete plan. I would steer the project in whatever direction seemed most interesting at the moment. Every two weeks, I’d start on some new feature for the game engine, but never quite finish it. We most likely could have built a game, if I’d been willing to aim at a simpler, more realistic objective.

These days, before I even start on a new project, I try to spend some time doing some research to convince myself that this project is worth doing, that I have the time to do it, and that I can set realistic goals. I try to start small. I ask myself what is the smallest, simplest version of my idea that I could implement, with the least amount of features, and I try to plan out the steps I will need to complete to get to that. Simply put, the first step, in my view, is to clearly outline what the Minimum Viable Product (MVP) is going to be. Defining an MVP helps me stay focused, and it also ensures that I have a goal simple enough that I can be sure I’ll stay motivated long enough to get there.

Many people make the mistake of thinking that if they don’t immediately account for all the features they could possibly want to add to a project from the beginning, they might paint themselves into a corner, unable to refactor the code, unable to bring the project where they ultimately want it to be. My counter-argument would be that refactorings are inevitable. You will make design choices that turn out to be wrong. You will need to change your code. You simply can’t account for every possibility and every single interaction from the beginning, because there are too many unknowns. If you start with an MVP, you will gain a lot of insight in the process. You will also have a working product that is very simple, and so very easy to refactor.

Trying to build simple products will help you keep people motivated, gain insights, and ultimately reach your goals. It might also help you avoid bugs. Less code means fewer corner cases, fewer things to test, fewer things that can break, and less to debug. This is something that good engineers understand very well. When you minimize the number of moving parts, when you minimize complexity, you minimize the chances that your product, and your project, will fail.

I think most of what I’ve said so far is commonly accepted wisdom among seasoned programmers. What I’d like to bring up next is that minimizing the complexity of your software, eliminating possible points of failure, is not just about simplifying the code you write. In my opinion, it also applies to the things your software stands on. The software you build necessarily makes a number of assumptions, and has external dependencies. Most programmers, it seems to me, follow the “don’t reinvent the wheel” philosophy. If something has already been implemented, you should just use it, never implement your own. This is seen as a way to minimize the complexity of your software. The problem is that not all external dependencies are created equal.

Every library that you import is a piece of software you don’t have control over. It’s something that needs to be built and installed in order for your software to run. It’s a black box with its own many dependencies and possible points of failure. How often have you tried to install a library or piece of software and found that it was broken out of the box? If your software has 15 external dependencies, then quite possibly, over the next year, one of these will break, and your software will be broken along with it. If you’re programming in Python, chances are that your software will break several times over the next few months, as packages are changed and broken under your feet.

When I write software, I try to minimize the number of dependencies I rely on. I do this both to minimize possible points of failure, and to make sure that people installing my software won’t have a terrible time getting it to work. When I have to rely on external dependencies, I try to pick the ones that are more established and well-maintained rather than obscure ones. Sometimes, I will “reinvent the wheel”, when I judge that the effort required is small enough. Obviously, this doesn’t always make sense. If you roll your own crypto and you’re not a crypto researcher, you deserve to be slapped upside the head. However, if you need to load textures in your indie game, you very well could implement a parser for 24-bit TGA images instead of relying on some library which itself has 50 external dependencies. This can be done in less than 100 lines of code.
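To illustrate the point, here is a rough sketch of what such a loader might look like (uncompressed 24-bit images only, ignoring RLE compression and bottom-up row ordering for brevity):

```python
import struct

def load_tga_24bit(path):
    """Minimal loader for uncompressed 24-bit TGA images (illustrative sketch)."""
    with open(path, 'rb') as f:
        header = f.read(18)
        (id_length, color_map_type, image_type,
         _cm_first, _cm_length, _cm_depth,
         _x_origin, _y_origin, width, height,
         pixel_depth, _descriptor) = struct.unpack('<BBBHHBHHHHBB', header)

        if image_type != 2 or pixel_depth != 24 or color_map_type != 0:
            raise ValueError('only uncompressed 24-bit truecolor TGA is supported')

        f.read(id_length)                  # skip the optional image ID field
        data = f.read(width * height * 3)  # pixel data, stored as BGR triplets

    # Convert BGR byte triplets into a row-major list of RGB tuples
    pixels = [(data[i + 2], data[i + 1], data[i]) for i in range(0, len(data), 3)]
    return width, height, pixels
```

The point isn’t that everyone should write their own image loaders, but that for a narrow, well-understood format, the cost of owning a small piece of code like this can be much lower than the cost of dragging in another dependency tree.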

There are other ways in which you can minimize points of failure and make your software more reliable. You can prefer simple file formats, prefer open standards, and avoid proprietary solutions. The best way to keep things simple, however, is to simply have fewer features. I’m a minimalist. I prefer to build software that does one thing and does it well. Ultimately, you’ll never make everyone happy, and you’ll never satisfy every use case, not without contorting your design into something unmaintainable and fragile.


My new Job in RL & Robotics Research

After I completed my PhD in early 2016, I took a job with the GPU compiler team at Apple and moved from Canada to the United States. I wasn’t sure if Silicon Valley was for me, but I figured that if I was going to try living in another country, it was best to make that kind of move when I was young, without property, children or a life partner.

My job at Apple was in many ways a dream job. It was sunny almost every day, I worked on a small team of highly qualified, friendly engineers, had an awesome boss, ate lots of insanely great organic food, and spent my work days sitting in a comfy Aeron chair. Unfortunately, I wasn’t quite happy there. For one thing, I didn’t quite mesh with Apple’s secretive culture. I didn’t love the culture of Silicon Valley that much either, but the biggest problem was that the isolation got to me. Living in an American suburb far from everything I knew, and dealing with illness in the family back home, I ended up having to take antidepressants for the first time in my life. I decided to move back to Montreal, the city I love, in the hope of living a healthier and happier life.

It’s been nine months since I started my new job as a staff member slash research assistant at the Montreal Institute for Learning Algorithms (MILA). It’s one of the biggest (if not the biggest) university research labs focused on artificial intelligence, headed by Professor Yoshua Bengio. This job doesn’t come with a Silicon Valley salary, but in my view, it’s a good mix of the perks I could get in an industry job, combined with the freedom that comes with academic research. The schedule is very flexible, the work is highly experimental, and best of all, as a research assistant, I have the opportunity to publish, but not the obligation.

You might be surprised by this shift in career plans. I did my PhD in compiler design, so why am I working in a deep learning lab? Obviously, there’s a lot of excitement surrounding machine learning and AI right now. I, like many others, believe that this technology will transform the world in a million ways, most of which we haven’t even begun to imagine. That’s one reason why I’m here: I’ve shared that excitement for a long time. However, that’s not the only reason. Fundamentally, I like research, and machine learning is a very active area of research. Compilers, as cool as they are, are no longer a very active research topic. Most industry compiler jobs revolve around maintenance and the implementation of tried-and-tested ideas. Most academic compiler research is focused on incrementalism. There unfortunately isn’t that much pioneering.

I feel very fortunate that my new job has allowed me to pick which projects I would get involved in. I’ve chosen to focus on projects in the areas of reinforcement learning and robotics. Reinforcement learning holds a lot of promise as a technique for teaching algorithms new tricks. Robotics has fascinated me since I was a child, and offers me the opportunity to tinker with electronics and more concrete projects. Another great perk of this new job of mine, is that being an academic lab, they fully embrace the open sharing of ideas and information. I will be allowed to blog and discuss in detail the projects that I am working on. Stay tuned!