Wednesday, February 12, 2014

OpenMV: low cost, hackable, scriptable machine vision

Introduction

Ibrahim and I want to change the nature of hobby and educational robotics, so we're building the OpenMV Cam to be the most hackable, lowest-cost machine vision platform out there.

OpenMV Cam will be low cost. You write scripts in Micro Python, using a friendly IDE, and they run on the module itself to drive the machine vision algorithms. The module supports Serial, SPI, and I2C, plus USB streaming, so it can be integrated into a robot or hooked up to your desktop. And it's easily hackable: it's built around the popular STM32F4 and programmed with an open source toolchain, so we can easily write our own software for it.
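
To give a flavor of the scripting model, here's a rough sketch of the kind of script we mean: configure the sensor, grab frames in a loop, and push results out over a serial port. The module and function names below (sensor, time, pyb.UART) follow the current OpenMV MicroPython API, so treat them as an approximation rather than the exact calls in the early firmware; the UART number is just an assumption for illustration.

    # Minimal OpenMV-style script: grab frames and report over UART.
    # Names assume the current OpenMV MicroPython API.
    import sensor
    import time
    from pyb import UART

    sensor.reset()                        # initialize the camera sensor
    sensor.set_pixformat(sensor.RGB565)   # 16-bit color
    sensor.set_framesize(sensor.QVGA)     # 320x240 frames
    sensor.skip_frames(time=2000)         # let exposure settle

    uart = UART(3, 115200)                # UART number chosen for illustration
    clock = time.clock()                  # simple FPS tracker

    while True:
        clock.tick()
        img = sensor.snapshot()           # capture one frame
        uart.write("fps=%.1f\r\n" % clock.fps())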

OpenMV Cam Is Available

And you can help. We're producing a short run of OpenMV Cam modules on Tindie so folks can help us put the final polish on the software; the firmware and IDE are actually pretty far along. Eventually we'll run a fundraiser campaign. As of 9/16 we're assembling OpenMV Cam modules, and you can backorder them on Tindie. Nobody gets charged until they're ready to ship.

Demonstrations

Imagine the projects you could build with 25fps face detection. An automatic bathroom mirror light, perhaps? A doorbell that rings itself when someone walks up to your door? Here's a video (that's Ibrahim on camera) of a Viola-Jones classifier detecting a face, then FAST/FREAK keypoints tracking it through changes in scale and rotation. The video shows the IDE in action, with all the processing handled by the OpenMV Cam.
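
For reference, here's roughly what the detection half of that demo looks like as a script, using a frontal-face Haar cascade. The HaarCascade and find_features calls follow the present-day OpenMV API; the early firmware in the video may name things differently.

    # Face detection with a built-in frontal-face Haar cascade.
    # API names (HaarCascade, find_features) assume current OpenMV firmware.
    import sensor
    import image
    import time

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)   # Haar cascades run on grayscale
    sensor.set_framesize(sensor.QQVGA)       # small frames keep the frame rate up
    sensor.skip_frames(time=2000)

    face_cascade = image.HaarCascade("frontalface", stages=25)
    clock = time.clock()

    while True:
        clock.tick()
        img = sensor.snapshot()
        # find_features returns a bounding box for each face-like region.
        for r in img.find_features(face_cascade, threshold=0.75, scale_factor=1.25):
            img.draw_rectangle(r)
        print(clock.fps())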



What if you could do 60fps multi-color blob detection? Buy two and build a stereo blob tracker? Laser scanning? Sort M&M's like a boss? Flame detection for Trinity competitions? More? This video shows blob detection as seen in the IDE, again with the camera module doing the processing.
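
Here's a sketch of what multi-color blob detection looks like from a script, tracking two color ranges at once. The find_blobs call follows the current OpenMV API, and the LAB thresholds are placeholder values you'd tune for your own colors and lighting.

    # Multi-color blob detection sketch. Thresholds are
    # (L_min, L_max, A_min, A_max, B_min, B_max) in LAB color space
    # and are placeholders; find_blobs assumes current OpenMV firmware.
    import sensor
    import time

    RED_THRESHOLD = (30, 100, 15, 127, 15, 127)    # placeholder "red"
    BLUE_THRESHOLD = (0, 60, -10, 30, -128, -20)   # placeholder "blue"

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)
    sensor.set_auto_gain(False)       # auto gain/white balance hurt color tracking
    sensor.set_auto_whitebal(False)

    clock = time.clock()
    while True:
        clock.tick()
        img = sensor.snapshot()
        # Each blob reports a bounding box, centroid, and pixel count.
        for blob in img.find_blobs([RED_THRESHOLD, BLUE_THRESHOLD],
                                   pixels_threshold=100, area_threshold=100):
            img.draw_rectangle(blob.rect())
            img.draw_cross(blob.cx(), blob.cy())
        print(clock.fps())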


Imagine what hobby electronics, and STEM education, could become when machine vision is as affordable as an Arduino. Imagine the things we could do together, the problems we could solve.

Here's a video of the OpenMV Cam hooked up to an LCD to stream live video. It can also save video to the microSD card, or stream it to your computer as you saw above.
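
If you're curious how that looks in script form, here's a rough sketch that mirrors frames to the LCD while recording MJPEG video to the microSD card. The lcd and mjpeg module names come from the current OpenMV firmware and may not match the build shown in the video.

    # Mirror the camera to an attached LCD while recording MJPEG video
    # to the microSD card. The lcd and mjpeg modules are assumed from
    # current OpenMV firmware.
    import sensor
    import time
    import lcd
    import mjpeg

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QQVGA)   # small frames suit the LCD shield
    sensor.skip_frames(time=2000)

    lcd.init()                           # bring up the LCD
    rec = mjpeg.Mjpeg("demo.mjpeg")      # the file lands on the microSD card

    clock = time.clock()
    for _ in range(300):                 # record a few hundred frames
        clock.tick()
        img = sensor.snapshot()
        rec.add_frame(img)               # append the frame to the video file
        lcd.display(img)                 # and show it on the LCD
    rec.close(clock.fps())               # finalize the file with the measured FPS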


Join The Community

We're looking to build a community, and we'd like you to join our Google Group: https://groups.google.com/forum/#!forum/openmvcam

Micro Python

You'll be able to script the camera in Micro Python, a lightweight Python implementation for microcontrollers. The module loads scripts off the microSD card. Some of the bindings are in place, with full control in the works.
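
In practice a script is just a main.py file dropped onto the microSD card; the module runs it at boot. As a hedged example of what such a file might contain (the LED numbering and img.save call assume the current, pyboard-style OpenMV API):

    # main.py -- placed on the microSD card, this runs when the module boots.
    # LED numbering and img.save() assume current OpenMV firmware.
    import sensor
    import time
    from pyb import LED

    sensor.reset()
    sensor.set_pixformat(sensor.RGB565)
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)

    led = LED(2)                          # status LED
    for i in range(5):
        led.on()
        img = sensor.snapshot()
        img.save("/capture_%d.jpg" % i)   # JPEGs are written to the microSD card
        led.off()
        time.sleep_ms(1000)               # one snapshot per second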

There's an IDE you can use with the camera that includes a Python shell and a frame buffer viewer; you can use it to run scripts and save them to flash.


Also, for more flexibility, you can use several OmniVision sensors on this board: the 0.3MP OV7660, the 1.3MP OV9650/OV9655, and the 2MP JPEG-capable OV2640. The OV2640 is the sensor we like best, and it's the one that ships with the OpenMV Cam on Tindie.

Hackable Microcontroller

What have you always wanted your machine vision system to do? Because the camera runs on a widely known STM32F4 single-core MCU, you can write, flash, and debug your own firmware using an open source toolchain, CMSIS, and ST's handy Standard Peripheral Library.

There's already a growing support community around this chip family, thanks to Pixhawk, the STM32F4 Discovery boards, and more.

We're presently using an STM32F407 running at 168MHz, with frames captured over the chip's native camera interface. Ibrahim has experimented with overclocking to 240MHz.

Algorithms

The OpenMV currently implements multi-object, multi-color blob tracking; Viola-Jones detection with Haar cascades (easily converted from OpenCV); and FAST + FREAK keypoint matching. You can use these for relatively high frame-rate face and object detection.
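
To show how the FAST + FREAK half fits together, here's a sketch of keypoint tracking: extract a descriptor set from a reference frame, then match it against each new frame. The find_keypoints and match_descriptor calls follow the current OpenMV API and stand in for whatever the early firmware exposes.

    # FAST corners + FREAK descriptors for scale/rotation-tolerant tracking.
    # API names (find_keypoints, match_descriptor) assume current OpenMV firmware.
    import sensor
    import image
    import time

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)
    sensor.set_framesize(sensor.QQVGA)
    sensor.skip_frames(time=2000)

    # Grab a reference frame and extract the keypoints we want to track.
    ref_kpts = None
    while ref_kpts is None:
        img = sensor.snapshot()
        ref_kpts = img.find_keypoints(max_keypoints=100, threshold=20,
                                      corner_detector=image.CORNER_FAST)

    clock = time.clock()
    while True:
        clock.tick()
        img = sensor.snapshot()
        kpts = img.find_keypoints(max_keypoints=100, threshold=20,
                                  corner_detector=image.CORNER_FAST)
        if kpts:
            # Match the stored descriptors against this frame's keypoints.
            m = image.match_descriptor(ref_kpts, kpts, threshold=85)
            if m.count() > 10:            # enough matches to trust the result
                img.draw_rectangle(m.rect())
                img.draw_cross(m.cx(), m.cy())
        print(clock.fps())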

Fundraiser

We'll be doing a fundraiser (Kickstarter, Indiegogo, or something along those lines) in the future. For now we'd like to involve you in the community and put some polish on the software.


Comments

  1. I like that the Pixy has servo headers for doing stand-alone pan/tilt. I also like the idea of having some I/O pin breakouts for other stand-alone uses. It's nice when you don't have to add an extra MCU for simple applications.

  2. I like that it can support several camera options. One interesting idea is a low light/IR camera. Of course color tracking wouldn't work so well in that case, but other types of MV might.

  3. I like the idea of using SDRAM with the 429 part. I have no idea what I'd do with a few GB of RAM, but no one has ever complained that their microcontroller had too much RAM. The extra speed over the 407 part would be a bonus, too.

    I think the SD card sticks out too far in the current version. I know this is asking for the moon, but I'd try to rework the board to minimize that.

    I personally would like all unused I/O pins from the microcontroller to be brought out to headers. On a board that size, clearly that's a challenge. Maybe put a high-density socket like the connector for the camera on there and package the OpenMV with a breakout board that takes those signals out to 0.1" centers.

    I think the biggest barrier to adoption is going to be the difficulty of setting up a development environment for the ST parts. I have a five-part tutorial series on my blog going over this in detail, and it took a LOT of research to get up and running. (http://thehackerworkshop.com/?p=391) In order to promote adoption I think it's necessary to buy/build/find/make/steal a one-click dev environment installer. This is why the Arduino is so popular: they make it easy to get started. Their IDE is crap, there's no debugging facilities at all, the resulting code is slow... but it's easy to get started. Seriously, no matter how polished and ready the hardware is, don't take it to market until you get this part down. You won't have a second chance to enter the market. In order to be "The Arduino of Machine Vision" you **have** to solve this problem. I have dev environments up on both my Windows machine and my Ubuntu machine for the STM32F0/F4 parts, so it can be done, but you have to make it EASY. It looks like Ibrahim has a good start on this with his IDE. This IDE, not the hardware, is what's going to sell the product.

    The second thing you'll need to become "The Arduino of Machine Vision" is a community. Before creating a Kickstarter campaign, create a discussion forum somewhere. Build a software repository that makes it easy for people to share code. Go ahead and use a Git or Subversion back-end, but be sure to have a plain old web interface for novice users to get library files without having to install or understand a source code client tool.

    I think that Ibrahim has developed a fantastic hardware solution, and I will absolutely be a Kickstarter backer (assuming you won't sell one to me earlier). But I'm not your target audience. I'm a guy who's willing to spend weeks putting together a build environment and blog about it. Your target audience wants to buy the thing, power it on, and have it start doing cool stuff out of the box. Then, they want to go online and download other cool solutions without understanding the hardware. Only after they've marvelled at what is provided will they take the time to peel back the covers and look under the hood and say "wouldn't it be nice if..." So you need to provide some "Wow!" demo software and make it easy for them to dig in deeper. Then you need to provide tutorials and examples and community support.

    That, in my very ignorant opinion, is how you become "The Arduino of Machine Vision." If I can help you develop this product, please let me know. I'd be proud to be associated with a project like this. You can email me at my first name at my domain name dot com.

    Replies
    1. Awesome thoughts and suggestions. I'll get in touch with you in email, Matthew.

  5. I think that it would be really cool to be able to put 2 units together and then, using the detected objects, figure out how far away the detected object is (assuming you know the distance between the 2 cameras).

    Replies
    1. Definitely! I'm thinking stereo detection of the red AVC barrels! :) Thanks, Dave.

  6. One of the guys working with the Tau Labs INS has an STM32 development environment in a downloadable VM so that people can easily get up and going. Excellent idea that should be more widely used.

    Replies
    1. Great tip, thanks Don. You wouldn't happen to have a link or name handy?

    2. https://github.com/TauLabs/TauLabs/wiki/Development-virtual-machine

  7. CCV (http://ccv.nuigroup.com/) is open source software that "takes a video input stream and outputs tracking data (e.g. coordinates and blob size) and events (e.g. finger down, moved and released) that are used in building multi-touch applications."

    The only problem is that you need a PC to run it, which is a large form factor and costs $$. The OpenMV could potentially accomplish the same result but with a small footprint and low cost!

    It should be able to read IR blobs (fingers pressed against glass) or laser pointers and output x,y coordinates.

    I attempted to use a Wii Remote for this, but the low resolution of the camera and short effective distance (< 1m) prohibited it from working as a touch screen solution.

  8. Could it detect a house from any angle? It could then be used for navigating a robot. And once Google opens up street view with an SDK, we could use their data as a reference input to OpenMV to navigate with much more precision than GPS!

  9. Let me put the OpenMV on my AR glasses, and highlight the objects in front of me that are moving, even while I'm turning my head.

  10. Put two OpenMV's on the Oculus Rift or an Android Tablet. Cheap but robust AR capability.

  11. Add a laser, telephoto lens and time of flight metric to get a laser range finder?

    Replies
    1. Wow, awesome, thanks for all the great ideas! Would love to have you join the Google Group and keep the ideas flowing -- https://groups.google.com/forum/#!forum/openmvcam

  12. Add a laser line generator and a gyro to make a 3D scanner. A robot can then map its surroundings.

  13. Ibrahim and I are making OpenMV Cam available on Tindie, limited run, for folks wanting to help us test firmware. https://www.tindie.com/products/bot_thoughts/openmv-cam/

  14. I like the idea of ringing the doorbell automatically.

