
It’s Been Done Before

I’m someone who comes up with a lot of ideas. When I was a teenager, I used to constantly get excited about new projects, and I often found myself getting sidetracked, ditching existing projects before they were finished so I could start something else. It took me years to learn the discipline to choose a limited number of projects and stick with them. They say that ideas are a dime a dozen, but I would say that some ideas are definitely worth much more than others. Your strongest asset, as a creative person, is the ability to recognize which of your ideas have real potential and are truly worth spending your time on.

Nowadays, when I have an idea, I often write it down and set it aside. If it’s a really interesting idea, I’ll come back to it later, and maybe flesh it out a bit. I think it’s important to figure out the details, but also to criticize your own ideas a bit, by thinking of them in adversarial terms (how could this fail?). This is a gradual, iterative process. The more fleshed out an idea, and the more layers of adversarial testing it passes, the more it becomes worth spending time on. Ultimately, before you invest any real effort in a new idea, it’s also worth thinking about whether you have the time to do so, and how this would affect the other projects you’re working on, and the people you’re working with.

Once I’ve sufficiently fleshed out and tested an idea in my head, if I’m still excited about it, I’ll want to discuss it with other people. That helps me get useful advice, outside perspectives on how to improve the idea, and maybe even recruit some help. At this point though, the same thing always happens: I inevitably run into one or more people who give me a variant of “it’s been done before”. These people will point to some existing project that they believe is similar to what I’ve just described. Sometimes they mean well, and are just trying to help me differentiate my project, or to save me from spending effort on what would be a dead end. Sometimes it seems like they are cynics who can’t stand to see that I’m excited about something. I try to avoid working with the latter kind of person.

The most cynical among us would tell you that in movies, literature, and music, there are no more new ideas. It’s all been done before, “what’s old is new again”, all possible thoughts have already been conceived, and we’re doomed to forever rehash the same old ideas over and over again. There’s some truth to it: how many songs and movies are about boy meets girl, or the feelings that follow a bad breakup? The more songs and movies are written, the more various concepts and ideas have been explored, and the harder it becomes to come up with something truly groundbreaking and innovative. There is one caveat to this, however, which is that the world is changing constantly. What will love be like in the year 2073? It might not be quite the same as in 1982.

Your idea isn’t novel. Any software-related idea that you’ve had, someone already implemented it on a Lisp Machine at MIT back in 1977. Unfortunately, the backup tapes were lost in a fire and there’s no evidence left. I have no material proof, so you’ll just have to take my word for it: someone did beat you to the punch.

It’s happened many times that someone told me “it’s been done before” without being able to actually provide any reference to a similar idea. It’s happened that, after I did some digging, I found that whatever project the person cited was only superficially similar to what I had suggested, if you squinted really, really hard. There have also been times when someone pointed me to an existing project that was a very poor execution of my idea and basically told me that because that project had failed to take off, the idea would obviously not work.

Before you embark on a project and really invest yourself in a new idea, you should do some research and look at what’s already out there. It’s quite possible that you’ll find that your idea is not as novel as you thought it was. Still, I think that in the world of software, the worldwide context is always changing. It’s quite possible that as you start doing some research, you’ll find that others have tried to do something similar to what you want, but they didn’t execute well, or they simply had the right idea at the wrong time. Just think about electric cars. There have been many failed attempts (dating as far back as the 1890s) before there were successful commercial products. Finding such failures will provide you with an opportunity to learn from the mistakes of others.

Ultimately, having a truly novel idea might not even matter. If you have an idea for some kind of accounting software, and you find that your would-be competitors are raking in big profits selling something completely broken, you might be able to eat their lunch just by executing reasonably well. It can also be a fun learning experience to recreate an existing system without necessarily looking to innovate on its design. If you have an idea that really has you excited, go for it. It’s important to have realistic expectations: you may not succeed, but you will definitely learn something along the way. One thing is for sure: you’ll never succeed if you don’t even try. What’s the point of living if you’re not having any fun? Explore.

They Might Never Tell You It’s Broken

This blog post is a public service announcement (or maybe a reminder) for anyone working on a programming project that they have already released, or intend to release, to the public, be it as open source, a pet project, or a startup.

I’ve worked on a few successful open source projects over the last 10 years. I would consider these projects successful in that they got hundreds of stars on GitHub and each attracted multiple open source contributors. I actually shut down one of these projects because reviewing pull requests alone was becoming a second unpaid job on top of my regular job, taking multiple hours out of my evenings after work, which became exhausting, but that’s a story for another post. What I want to tell you about today is something important that I believe any developer should know, but that I personally didn’t understand until I had been working on open source projects for a few years.

As part of my PhD, I developed Higgs, an experimental JIT compiler for JavaScript written in the D programming language. I developed it on GitHub, completely in the open, and wrote about my progress on this blog. Pretty soon, the project had 300 stars on GitHub, a handful of open source contributors, and I was receiving some nice feedback. It made me happy to have so many people taking notice of my research work. As part of Higgs, I had written my own x86 machine code generator, which enabled it to do machine code pirouettes LLVM couldn’t. I did all my development on Linux, but I had done my best to keep the code as portable as possible, and so I had assumed that the code would work fine on MacOS as well. Unfortunately, I was wrong.

About a year into its development, Higgs had enough of a small community that it made sense to create a chat room to exchange with other contributors and users. About a dozen people joined over the next two months. One day, someone I had been exchanging with on the chat room for two weeks reached out to me to report a strange bug. They couldn’t get the tests to pass and were getting a segmentation fault. I was puzzled. They asked me if Higgs had MacOS support. I explained that I’d never tested it on MacOS myself, but that I couldn’t see any reason why it wouldn’t work. I told this person that the problem was surely on their end. Higgs had been open source for over a year. It was a pretty niche project, but I knew that at least 40-60 people must have tried it, and that at least half of them must have been running MacOS. I assumed that surely, if Higgs didn’t run on MacOS at all, someone would have opened a GitHub issue by now. Again, I was wrong.

The problem, it turned out, was that MacOS has stricter requirements for keeping the stack pointer aligned. This wasn’t difficult to fix. The more important lesson, which I didn’t understand until that point, is that you can’t count on the people trying your project to quickly and reliably report bugs to you. Most of the time, if it doesn’t work, they won’t report the problem. There are a few reasons why this might be:

  • They assume that someone else has already reported the problem, and there would be no point in saying anything. The bigger your project, the more likely people are to assume that someone else has already reported the issue.
  • They think the fault might be on their end; they may be confused and feel too embarrassed to reach out for help. Nobody wants to look stupid or get told to RTFM, and so they choose silence.
  • They are just trying your project out of curiosity, and are in no way invested. They will find an alternative to your project, or go do something else.

It’s a horrifying thought, but it could be that for every one person who opens an issue on GitHub, 100 or more people have already tried your project, run into that same bug, and simply moved on. So, what can you do? You can encourage people to report bugs. I state prominently in my GitHub READMEs that reporting bugs is encouraged and welcome: “Please tell me if something is wrong, you’re helping me make this project better.” Another obvious thing that you can do is to have robust automated testing. Some continuous integration services can automatically test on both Linux and Mac, as sketched below.
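As a minimal sketch of what such an automated check might look like (the example path and everything else here is hypothetical, not taken from any of my projects), a smoke test that runs one small end-to-end example in a subprocess is often enough for a CI service to catch platform-specific breakage, like the MacOS stack alignment bug above:

```python
# test_smoke.py -- a minimal, hypothetical smoke test (pytest style).
# It doesn't aim for coverage; it just exercises the same end-to-end path
# a first-time user would hit, so platform-specific failures show up in CI
# rather than in silence.
import platform
import subprocess
import sys


def test_hello_example_runs():
    # Run a tiny example program in a fresh subprocess, the same way a
    # new user would invoke it. "examples/hello.py" is a placeholder path.
    result = subprocess.run(
        [sys.executable, "examples/hello.py"],
        capture_output=True,
        text=True,
        timeout=60,
    )
    assert result.returncode == 0, (
        f"smoke test failed on {platform.system()}:\n{result.stderr}"
    )
```

A check this small, run on a CI matrix that includes both Linux and MacOS workers, would likely have surfaced the Higgs segfault as soon as it was introduced, instead of a year later.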

More broadly, if you want your project to be successful, I think it’s important to try and put yourself in the user’s shoes. Every once in a while, try installing your software from scratch, along with all the dependencies. Ideally, you want your installation process to be as simple and frictionless as possible. If your software requires the user to perform 20 steps to get it running, you’ll be losing potential users even before they have a chance to figure out if it works on their system or not. In general, I find that writing software with a minimalist design philosophy, minimizing external dependencies as much as is reasonable to do so, will help you avoid bugs, and streamline your installation process.

Balancing Fun, Stress and Profit

A little over a week ago I launched Zupiter, a browser-based music app I created over the last 8 months or so. It’s a minimalistic modular synthesizer that runs in a web browser. It’s pretty basic, and it will never replace something like Ableton Live, but the appeal is that anyone with Chrome or Firefox can make music instantly, for free, without having to download or install any software, and then share what they’ve created with friends online.

For fun, I did the launch in two parts, online and in the real world. I started by writing a post on this blog, which someone shared on Hacker News, where it remained on the front page for about 12 hours. Zupiter was later featured on Hackaday. A few days later, in the real world, with help from multiple friends, I organized a fun little launch party, featuring a live demo and performance in conjunction with Montreal-based musician Feren Isles. Since then, the app has gained a little bit of traction: it has been visited over 29,000 times, over 260 people have created accounts, and together these people have shared more than 500 projects created using Zupiter.

 

With that bit of traction, I couldn’t help but ask myself “what’s next?”. I have to admit, I’ve dreamed about this turning into a side-business, a startup, maybe a SaaS that could bring in income. Who knows, maybe enough income that I could be my own boss one day. The only problem is, I don’t think I’m anywhere close to that. In order to get there, the app needs many improvements. It also needs a massive amount of promotion. Furthermore, I started to realize something that should have been obvious from the start.

I read an article a few days ago in which Takuya Matsuyama, creator of the Inkdrop app, casually mentioned having made over $9K in sales after making the HN front page. I made the HN front page with my app too, and two people donated a total of $20 via PayPal. I’m thankful for those two donations, and I promise the donors that I will use this money wisely. However, this article made me realize that when it comes to turning a profit, I’ve been doing it wrong. One important detail struck me, which should have been obvious from the start: browser-based apps are most probably not where the money is.

Inkdrop sells monthly subscriptions through the Android and Apple app stores. If I want to make money with music software, I should probably be making mobile apps, because there is infrastructure set up for people to buy mobile apps, and a culture of people doing so, but that same infrastructure and culture doesn’t really exist for browser-based apps. The truth is, if I want to be making money, there’s a lot of things I should be optimizing for, which I totally haven’t been. Thinking of all those things I maybe could, or should, or would have to do got me feeling somewhat anxious, stressed, and then sad.

There’s a reason why I designed Zupiter as a browser-based app. I did it because I felt there was a niche to be filled. There are a lot of music apps for mobile, but what’s available in terms of browser-based music software is still fairly limited. I created Zupiter because it’s a tool that I myself want to use for sound design, and I wanted to use that tool on a 32″ widescreen display, with a MIDI controller, not so much on a phone or tablet. Besides that, I created Zupiter because I wanted to have fun.

There have been other discussions on Hacker News relating to Takuya’s article, and many of the commenters pointed out that there’s often a difference between a side-project and a business. A side-project is something you do for fun in your free time. Running a business, however, requires doing a ton of extra work that isn’t development, and making a lot of tiny decisions at every step of the way to optimize for growth, and ultimately, for profit.

To me, that seems stressful. I already have a full-time job. It’s a fun, creative and interesting job. It’s also demanding enough that I don’t want to come home from work and start to stress about optimizing my side-project for profit and worrying about missed opportunities. I don’t think I have the bandwidth for a second job on top of my job. The amount of work required to get to a stage where I could support myself from this source of revenue would likely push me into a burnout. At this stage, I’d rather optimize for fun than for profit. I want to manage my side-projects in a way that keeps them fun and avoids having them become an extra source of stress.

At this time, I don’t think I can bootstrap a side-business all on my own, it seems like an unrealistic goal for me to pursue, but that doesn’t mean it’s impossible for Zupiter and future music apps I create to bring in revenue. One of the people who showed up at my launch party was Devine, co-creator of Orca, a nifty programming language for livecoding musical patterns. I was amazed to learn that he lives with his partner on a sailboat in Japan, and makes a living creating (very cool) content and crowdfunding through Patreon.

Devine’s story got me thinking that maybe I can do something like this too. I don’t really want to develop for mobile and optimize for profit right now. I also don’t really want to quit my job and live on a sailboat, but I do want to keep creating useful software and content and share it with the world, and it would be cool if there was a way that I could save up enough to take a few months off to work on my side-projects full time someday. I went ahead and created my own Patreon page so I can get the ball rolling. In the meantime, I don’t want to focus on optimizing my side-projects for profit and growth, because I want to make sure that working on these remains fun.

The realization that Zupiter may never turn into a profitable side-business has been a bit discouraging for me, but I think I’m being honest in saying that building Zupiter has been fun and rewarding so far. I’ve accomplished an important personal goal with this project: I built my dream music making tool. Others are using this tool and finding it useful. They’ve created awesome and beautiful things with it, things I wouldn’t have built myself. Every day during the last week, I listened to the projects people shared, and found myself surprised, impressed and even a little tearful at times.

 

Zupiter: a Web-Based Modular Synthesizer

When I was a kid, I thought I didn’t like music; other people really enjoyed it, but I just didn’t get it. This all changed when I was 12 and first saw the movie “Hackers”. You can think whatever you want of that movie, whether you consider it a classic or everything that’s wrong with Hollywood, but it had a mind-blowing soundtrack featuring several pioneering electronic music artists such as The Prodigy, Orbital and Underworld. Ever since I was exposed to this, I’ve been in love with electronic music.

In my mid-twenties, after I’d learned how to program, I started to wonder if there was a way that I could combine my passion for programming with my love of electronic music. I started reading books about sound synthesis, composition, etc. I wrote a web-based music app, MusicToy, back in 2012. This app originally used the Mozilla Audio Data API (this was before the Web Audio API was made available). The app is super simple, but that was the point, I wanted to create something that was beginner-friendly, and could allow anyone to create a fun pattern, without having any knowledge of music theory.

Today, I’d like to present to you Zupiter, a browser-based web app I’ve been working on in my spare time for the past 8 months. It’s a modular synthesizer that runs in a web browser. You can use it to build a custom synthesizer using a visual programming language inspired by Pure Data and Max/MSP.

The app is written entirely in JavaScript and makes use of both the Web Audio and Web MIDI APIs. You can play notes using your computer keyboard (A to L keys), using a built-in sequencer node, or you can connect an external MIDI keyboard or sequencer to your computer and use that to play a synthesizer you created in Zupiter. The app also makes it possible to map physical knobs on an external MIDI controller to virtual knobs in the app, just by double-clicking on a virtual knob and then wiggling the knob you want to map on your device.

I created the app because I wanted to have a powerful tool that would allow me to experiment and understand exactly how specific sounds are made. I also created it because, to my knowledge, there is nothing else quite like it. Yes, there are already other browser-based music apps, but to my knowledge, at this time, no other browser-based music app offers this degree of flexibility and power.

Using Zupiter, you can build your own synth from your browser, without spending any money or installing any software. You can then share what you just created with friends instantly, by going to the Share tab. Your friends can then hear what you just made, modify it, and share a new version of it, also without installing any software on their machine. My hope is that Zupiter will lower the barrier to entry, and get more people excited about making music.

If you’re curious about trying Zupiter, but a little confused about how it works, here are some examples to get you started:

Zupiter isn’t perfect. It’s only been tested in Chrome and Firefox, and it’s quite possible you’ll encounter some bugs, but I’m really excited and curious to see what you can make with it, and I encourage you to share what you create. If you’re confused as to how it works, there is a Help tab in the app with some basic examples and instructions, and I’ve created a subreddit where you can ask any questions. Your feedback is more than welcome!

The End of Online Anonymity

Since 2015, I’ve been writing about the impact that machine learning will have on our society. One of the most concerning possibilities, in my mind, was always the potential abuse of these technologies by malicious actors to manipulate or scam people, either through subtle means or by impersonating those they trust.

Today, this concern is very much mainstream: “fake news” has become a buzzword and a kind of modern-day boogeyman. Still, I think most people aren’t overly worried. We know that there are already malicious actors creating sketchy content and putting it out there, but most of it seems obviously fake if you examine it more closely. We all assume that we will always be smart enough to tell real from fake, and carry on.

Media manipulation is nothing new. Attempts to control public discourse and influence the masses predate the internet, TV, newspapers and the printing press. What’s about to change is that now, with machine learning, it will become possible to turn electricity into millions of voices relentlessly spreading your gospel to every corner of the internet. At this point in time, it seems most of the fake content out there is not generated using machine learning, it’s created by human beings using puppet accounts. For the most part, someone still has to turn the crank. That limits how much content can be created and how many sources it can come from.

Personally, I’m not just worried about manipulative articles being passed off as news. I’m also worried about the impact that networks of malicious bots will have on online communities. We’re still fairly far from the point where we can automatically generate news articles that appear convincing upon close inspection, but what about online comment threads? How difficult is it to build a bot that can write convincing one- or two-sentence comments?

Yesterday, I stumbled upon a link to a subreddit populated by bots based on OpenAI’s GPT-2 text generation model. The result is certainly funny, but it also leaves me feeling uncomfortable. Yes, much of the content is obviously fake, but many of the comments are actually believable. If you feel unimpressed, you should keep in mind that this is an individual’s side project that repurposed an existing neural network. As it is, the GPT-2 model simply generates text and completes a sentence. It’s an impressive and amusing tech demo, but not something you can easily control. In order to weaponize GPT-2, a malicious actor would need to add some kind of guidance system: a way to condition the text output of the model so as to spread a specific message.

The solution to the fake content problem may seem obvious: we can fight fire with fire, and build machine learning tools to detect machine-generated content. Tools like this are already in the works. Grover boasts 92% accuracy in detecting fake content. The sad reality, however, is that this is an arms race, and it’s not clear at all that this is something we can win. Facebook already employs thousands of human agents for the purpose of detecting and flagging malicious actors, and these people do flag a large volume of content, but they are struggling to keep up. As technology improves, fake content will become harder and harder to tell apart from real content. Manual content verification won’t be able to keep up with the volume, and automated filtering systems will fail.

In my opinion, there is only one effective way to stop fake content, and this is to verify that everyone who posts content is in fact human. You could ask people to upload pictures of themselves, but we’re already at the point where we can produce realistic images of imaginary people using GANs. Any counter-measure of this form will inevitably be defeated. Ultimately, one possibility is that online platforms will begin requiring a verified government ID in order to register. We could even end up living in a world where a kind of “e-passport”, crypto-signed government ID is attached to your every internet connection, and tracked everywhere online.

The rise of bots could render many online communities simply uninhabitable.  Large websites such as Facebook and reddit may have some hope of policing content, but smaller independent players likely won’t have the resources. We are moving towards a model where the internet is dominated by a few centralized content providers and their walled gardens, and generated content may unfortunately make it even harder for grassroots online communities to survive and grow.

I wish I had a more optimistic message for this post. I wish I could do more than point at a potential problem. Maybe there is a way to build a new web, a new kind of social media using a hash graph to implement a decentralized web of trust, something that can allow content verification without forcing everyone to sacrifice their right to remain anonymous online. I certainly think it’s a problem that’s worth thinking about, because unless we can come up with a technical solution, a regulatory solution may be imposed onto us, and it will inevitably favor the big players at the expense of the small.

MiniWorld: A VizDoom Alternative for OpenAI Gym

VizDoom and DMLab are two 3D simulated environments commonly used in the reinforcement learning community. VizDoom is based on the original Doom game, and DMLab is based on the Quake 3 game engine. Recently, DeepMind has produced impressive results using DMLab, showing that neural networks trained end-to-end can learn to navigate 3D environments using visual inputs alone, and even execute simple language commands.

These simulated environments are popular and useful, but they can also be difficult to work with. VizDoom can be tricky to get running on your system; there are unfortunately many dependencies, and if any of them fail to build and install, you’re going to have a bad time. Furthermore, both VizDoom and DMLab are fairly impractical to customize. The Doom and Quake game engines are written in (poorly commented) C code. Also, because VizDoom is based on a game from 1993, it stores its assets in a fairly archaic binary format (WAD files). DMLab is nice enough to provide a scripting layer which allows you to create custom environments without touching C code or using a map editor. However, this scripting layer is in Lua and is poorly documented.

The 3D environments used in reinforcement learning experiments are typically fairly simple (e.g. mazes, or rooms connected by hallways), and it seems obvious to me that the full Quake 3 game engine is overkill for what DeepMind has built with DMLab. As such, I set out to build MiniWorld, a minimalistic 3D engine for the purpose of building OpenAI Gym environments. MiniWorld is written entirely in Python and uses Pyglet (OpenGL) to produce 3D graphics. I wrote everything in Python because this language has become the “lingua franca” of the deep learning community, and I wanted MiniWorld to be easily modified and extended by students. Python is not known to be a fast language, but since the bulk of the work (3D graphics) is GPU-accelerated, MiniWorld can run at over 2000 frames per second on a desktop computer.

MiniWorld has trivial collision detection which prevents the agent from going through walls. It’s not physically accurate, and not appropriate for tasks like robotic arm control. However, I believe it can be useful for navigation tasks where you want to procedurally generate a variety of simple environments. It can render indoor (and fake outdoor) environments made of rooms connected by hallways. The package is designed to make it easy to generate these environments procedurally using code; that is, you never have to produce map files. Because MiniWorld has been intentionally kept minimalistic, it has very few dependencies: Pyglet, NumPy and OpenAI Gym. This makes it easy to install and get working almost anywhere, which is an asset in the world of machine learning, where you may have to get experiments running on multiple compute clusters.
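To give a sense of what using it looks like in practice, here is a minimal sketch of driving a MiniWorld environment through the standard OpenAI Gym interface. The environment ID below is an assumption for illustration; the actual list of registered environments is in the repository.

```python
# Minimal random-agent loop. Assumes gym-miniworld is installed and that an
# environment with this ID is registered (check the repository for the real
# list of environment names).
import gym
import gym_miniworld  # importing the package registers the MiniWorld-* envs

env = gym.make("MiniWorld-Hallway-v0")  # assumed environment ID

obs = env.reset()                # obs is an RGB image (NumPy array)
done = False
while not done:
    action = env.action_space.sample()          # random policy, for illustration
    obs, reward, done, info = env.step(action)  # standard Gym step signature
env.close()
```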

There are many use cases which MiniWorld cannot satisfy, but I believe that despite its simplicity, it can be a useful research tool. I’ve integrated domain randomization features which make it possible to do sim-to-real transfer experiments, similar to what I had previously implemented in the gym-duckietown environment. Domain randomization means randomly varying parameters of the environment to prevent a neural network from overfitting to the simulation, hopefully forcing the neural network to generalize to the real world. The videos below show a small robot trained to follow a red box in a simulated environment, and then tested in the real world.
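To make the domain randomization idea concrete, here is a deliberately generic sketch of the pattern. This is not MiniWorld’s actual interface, just an illustration of what gets re-sampled at the start of each episode.

```python
import random

def sample_domain_params():
    """Sample a fresh set of visual parameters for one episode.

    In a simulator, values like lighting, textures and camera noise are
    re-drawn at every reset, so a policy trained on rendered images cannot
    overfit to any single appearance of the world. (Illustrative only --
    not the actual MiniWorld API.)
    """
    return {
        "light_intensity": random.uniform(0.4, 1.2),
        "light_color": [random.uniform(0.7, 1.0) for _ in range(3)],
        "wall_texture": random.choice(["brick", "wood", "concrete"]),
        "camera_noise_std": random.uniform(0.0, 0.02),
    }

# At each episode boundary, the environment would rebuild and re-render the
# world using a new draw from sample_domain_params() before returning the
# first observation.
params = sample_domain_params()
```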

If you are interested in using MiniWorld for your research, you can find the source code in this repository. Bug reports, contributions and feature requests are welcome. If you feel that some specific feature would be particularly useful for your research, feel free to open an issue on GitHub. You can influence the course of this project.

A Small Dose of Optimism

Before I went to see Blade Runner 2049, I sat down with a friend to re-watch the original Blade Runner from 1982. At the beginning of the movie, there is a grim portrayal of the future. A title screen reads “Los Angeles: November 2019”. We are then shown a dark city obscured by smog, with tall buildings and even taller chimneys that have columns of oil fire and smoke projecting upwards.


It’s 2018, and there are a lot of things to be upset and worried about. The sum of all human knowledge has never been greater than it is now, but unfortunately, our world seems to be ruled by plutocrats, and divided by identity politics. The stock market is negative for the year, and the sky might be just about to fall, or so the news media would have you believe. Thankfully, the world is not yet as grim as what Blade Runner projected, and with any luck, it probably won’t be so grim in November 2019 either.

I’m very optimistic about green energy. I have been a Tesla shareholder since 2016, and it has been amazing to watch that company’s progress. Despite all the negative press, they produced and sold over 200,000 electric cars in 2018, and posted a profit that beat estimates in the last quarter. The naysayers will tell you that Tesla produces luxury cars which are out of reach of the masses. That’s currently still true, but Tesla has been sticking to its plan, which is to produce cheaper and cheaper cars as they perfect the technology and achieve economies of scale. Their cheapest car, the mid-range Model 3, is currently priced at $45,000 USD, which is a lot cheaper than the first car they ever sold, the Roadster, which went for $122,000 USD (adjusted for inflation).

In my opinion, Tesla is succeeding in its stated mission: “to accelerate the world’s transition to sustainable energy”. China and Europe are ahead of the curve in electric vehicle production and adoption, but Tesla, GM and Nissan are already selling electric cars in the US, and almost all major automakers have announced plans to begin producing them as well: even Ford has announced a Mustang EV. Tesla is forcing every automaker to see the obvious: if you can make them cheap enough and give them enough range, there is massive pent-up demand for electric cars. The electric car revolution is happening, for real.

The common argument against electric cars is that the electricity they use is often produced from non-renewable sources, negating their environmental benefit. I’ll begin by stating that this isn’t true everywhere. Quebec, the Canadian province where I’m from, produced 95.2% of its power from hydro, and 3.6% from wind, in 2016. However, even if all the power in the world were generated by burning coal, the fact is that industrial power plants are much more efficient at converting fossil fuels into energy than car engines. They can be made more efficient because they don’t have the same size and weight constraints as car engines do.

Many will argue that solar and wind power can’t possibly replace fossil fuels because the sun doesn’t always shine and the wind doesn’t always blow. This problem is being addressed as well. Tesla is building large batteries that can be installed in your home or at large power plants. Multiple startups are also working on utility-scale energy storage solutions. The cost of renewable energy technologies is still a little steep, but solar panels, windmills and lithium-ion batteries have been steadily becoming more affordable every year.

Currently, it’s possible to buy and install a 5 kW solar setup that includes a home battery for about $15,000 USD. This is enough for one person to live completely off the grid. That price may seem high, but if you amortize it over a 20-year mortgage and factor in the energy savings, you realize that it’s already within reach of many people. Assuming that the cost of solar cells and batteries keeps falling, we will soon reach a point where renewable energy is available 24 hours a day, and cheaper than fossil fuels. When that happens, it will be economically unjustifiable to use fossil fuels for energy, and the transition will be not only quick, but inevitable.
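As a rough sanity check on that amortization claim (the interest rate here is an assumption for illustration, not a quote from any lender or installer):

```python
# Back-of-the-envelope amortization of the solar + battery setup above.
system_cost = 15_000     # USD, 5 kW solar panels plus a home battery
years = 20
annual_rate = 0.04       # assumed fixed interest rate

n = years * 12           # number of monthly payments
r = annual_rate / 12     # monthly interest rate

# Standard fixed-payment loan formula.
monthly_payment = system_cost * r / (1 - (1 + r) ** -n)
print(f"~${monthly_payment:.0f} per month")  # roughly $91/month at 4%
```

At roughly $90 a month before counting any energy savings, the financed cost is already in the same range as a typical North American household’s monthly electricity bill, which is what makes the “within reach of many people” claim plausible.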

Global warming is still a problem, particularly if you take into account the runaway greenhouse effect. However, the transition to renewables could happen faster than you think. It’s not unrealistic to think that in less than 5 years, electric cars will be about as cheap as gasoline cars, and in less than a decade, we may see some oil and gas corporations going bankrupt. It’s also very possible that the cost of solar cells and energy storage will keep going down even after they become price-competitive with non-renewable energy technologies. Within this century, we may enter an era where energy is cheap and plentiful.

That’s great, but what about all the other problems in the world? If you look beyond the negative headlines, there are many reasons to remain optimistic. According to multiple sources, the number of people living in conditions of extreme poverty is hitting an all time low. It’s also likely that cheaper and cleaner energy will help us not just with clean air, but also alleviate issues of scarcity in many parts of the world. Solar panels and batteries can produce energy anywhere in the world, and it so happens that many of the poorest countries get plenty of sunlight.