Car in front of the trailer with the downhole measurement system

Testing our latest product

Usually we at BitSim help create things that are physically small, like PCBs and FPGA configurations. If we have to use a ruler longer than 10 cm, we consider something “large”. Not any more. Last October we performed a validation test of our latest product, and that is quite a bit bigger.

The product allows synchronous measurements to be taken over a long distance. All sensors, in this case hydrophones, can be daisy-chained. For the prototype we had a cable of 500 m between the controlling electronics and the first “node”, which takes the measurements from the hydrophones and sends the data to the controlling node. After that we had 20 meters between the remaining nodes. Well, that was the idea; we still needed to prove that it works.

To do that, we went to test the system together with partners from Uppsala University. Remember that we are used to small products? This one was so big it needed to be transported on a trailer. Since we had assembled the system in Uppsala, we went there to help pack and load the trailer. After reading the manual of the trailer a few times, and some trial and error, we got the system on the trailer and were ready to go.

Everything is ready for the long drive to Ludvika

By the time we got to our hotel it was dark and we were hungry. Some of us had been to Ludvika before and said they had had a good time in a pizza place. I won’t go into details here, but now we all have memories of that pizza place, and all subsequent meals were compared to our first meal in Ludvika.

The next day it was time to find the place where we were going to perform the tests, somewhere in the forests above one of the mines around Ludvika. It had been raining, so the forest road was a bit wet. But we got there.

Now we know the system can do some semi-offroad driving.

We started unpacking, and while some of us checked whether the hole was blocked and others went to pick up a generator, I started putting up the tent. When I was halfway, it started to rain. I remember thinking “I can keep dry while putting in the last poles.” When I finished, I remember thinking “Why is it raining inside the tent?” Among the tools we had packed was a tarp, which we put on top so that we, and the computers, were kept dry.

Once we had verified that the hole was not blocked, at least down to a depth of about 500 m, we sent down our system, well, the part that is meant to go down, at any rate.

The last node to go down the borehole.

Maybe I should tell a bit more about what we were going to do. The system will be used to form an image of the ground under the surface, using old boreholes. I don’t know much about how geophysicists create such an image, but I have seen them use graphs that look like the output of old seismographs. You know, the instruments used to measure earthquakes. Well, the idea is to make mini earthquakes and then capture the resulting waves. The hole has water inside, so using hydrophones we can record the sound waves that result from reflections inside the ground. Mini earthquakes? Yes, mini earthquakes. This involved driving around a bobcat with a big weight that would hammer the ground. Remember the picture of the forest road? They did a similar test there.

Connecting the weight to the bobcat.

We did two kinds of tests: one where we moved the hydrophones up and down, and one where we moved the bobcat. Why move the hydrophones up and down? Well, the prototype has five of them at a 10-meter interval, and for the analysis more measurement points are needed. One way is to put more hydrophones on the system, which we can do, but another way is to do it the way taking a panorama picture with your phone works: take the first picture, move a bit, take another, move a bit more, and so on. In this way we were able to measure the whole hole. Luckily, the cable to the hydrophones was on a motorized drum, so moving them up and down was easy.
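
For the curious, here is a minimal sketch of how such a panorama-style measurement plan could be computed. The five hydrophones, 10-meter spacing and roughly 500-meter hole come from the text; the step size of one full array length per move is an assumption for illustration, not the actual survey plan.

    # Sketch of a panorama-style measurement plan for the borehole survey.
    # Known from the text: 5 hydrophones, 10 m apart, in a ~500 m hole.
    # The step size (one full array length per move) is an illustrative
    # assumption, not the actual survey plan.

    NUM_HYDROPHONES = 5
    SPACING_M = 10
    HOLE_DEPTH_M = 500
    ARRAY_LENGTH_M = (NUM_HYDROPHONES - 1) * SPACING_M  # 40 m

    def measurement_stops(step_m=NUM_HYDROPHONES * SPACING_M):
        """Yield the depth of the top hydrophone for each stop of the drum."""
        top = 0
        while top + ARRAY_LENGTH_M <= HOLE_DEPTH_M:
            yield top
            top += step_m

    for stop, top in enumerate(measurement_stops(), start=1):
        depths = [top + i * SPACING_M for i in range(NUM_HYDROPHONES)]
        print(f"stop {stop:2d}: hydrophones at {depths} m")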

Waiting for the next movement of the rig.

After spending two days at the test site we were almost done, and we decided not to stop when it started to get dark but to continue so we could finish that day. Which we did. After packing up and back-filling the potholes left by the hammering of the bobcat, the field part of the test was complete. This gave us time to do something else the next day. We had been working with new technology for the mining industry for the last couple of days, so we decided to look at some old technology before driving home.

About to take a look at the old technology.

Where we had gone right at this crossing for the last few days, this time we went left. We spent some time looking at the Klenshyttan smelting house; I refer you to the website of the smelting house if you want to know more. With that visit the field trip was at an end. Well, almost at an end: only the drive home remained, and that was nicely uneventful.

Oh, you are wondering about the other meals we had? Well, in my opinion both the Greek restaurant in the city centre and the Swedish restaurant next to the harbour had better food, but I won’t easily forget the pizza place.


By the time I write this, the data has already been analysed and we know the prototype works. The data we acquired was not very interesting according to the geophysicists, but they are already planning to take the prototype along when they visit another site next year.

6 Sensors

12 HD camera sensors streamed 14 Gigabit/s to a PC

The design consists of two cards, each with an FPGA. Each FPGA receives data from six 1280×800 HD camera sensors at 120 frames per second.

Each FPGA streams its six channels through a 10Gb UDP Ethernet block (our own IP block) directly to a PC. Everything is done in pure hardware; none of the video flow is handled by the ARM CPU in the FPGA in this version. Each 10Gb Ethernet cable carries 70% of full wire speed, i.e. 7 Gbps, for a total of 14 Gbps for the PC to receive and render.
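
A quick back-of-the-envelope check of that link budget, with the bits-per-pixel value as an assumption since it is not stated above:

    # Back-of-the-envelope check of the link budget described above.
    # Resolution, frame rate and sensor count come from the text; the
    # bits-per-pixel value is an assumption, since it is not stated.

    WIDTH, HEIGHT = 1280, 800
    FPS = 120
    SENSORS_PER_FPGA = 6
    LINK_GBPS = 10.0

    def payload_gbps(bits_per_pixel):
        pixels_per_s = WIDTH * HEIGHT * FPS * SENSORS_PER_FPGA
        return pixels_per_s * bits_per_pixel / 1e9

    for bpp in (8, 10, 12):
        gbps = payload_gbps(bpp)
        print(f"{bpp:2d} bpp: {gbps:5.2f} Gbps "
              f"({gbps / LINK_GBPS:.0%} of one 10G link)")
    # Around 10 bpp this lands near the ~7 Gbps (70%) per link reported
    # above, i.e. ~14 Gbps in total over the two cables.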

Of course, the FPGAs can also encode and compress the incoming data to reduce the data flow, or do early processing.


IR video over SPI

IR sensor interface

BitSim has developed a receiver for FLIR’s Video over SPI (VoSPI), an interface that enables streaming images from a Lepton infrared camera directly to an FPGA-based image processing system. You can use it on your platform in several ways:

  • On Xilinx devices, with our new customized IP.
  • On any SoC with an ARM CPU and Python, with our pure-software driver.
  • In a PYNQ design, through a Python interface that integrates the VoSPI IP.

VoSPI stands for “Video over Serial Peripheral Interface”. The VoSPI protocol is designed to send out video in a format that allows transmission over a SPI interface while requiring minimal software or hardware. The sensor acts as the SPI slave, the host acts as the SPI master, and the video is streamed on the MISO pin. The host system uses custom logic to receive and render the video. The sensor sends out pixels in packets and segments that form a frame of 160×120 resolution.
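
To make the packet format concrete, here is a simplified host-side sketch of a VoSPI receiver in Python using the Linux spidev module. Packets are 164 bytes (a 2-byte ID, a 2-byte CRC and 80 big-endian 16-bit pixels); the device path and SPI clock are assumptions, and real code also needs resynchronization, segment-number checks and CRC validation. This is an illustration of the protocol, not our IP block.

    # Simplified host-side sketch of a VoSPI receiver for a Lepton 3.x
    # over Linux spidev. Real code also needs resynchronization,
    # segment-number checks and CRC validation; device path and SPI
    # clock are assumptions.
    import spidev

    PACKET_BYTES = 164         # 2-byte ID + 2-byte CRC + 160 payload bytes
    PIXELS_PER_PACKET = 80     # big-endian 16-bit pixels
    PACKETS_PER_SEGMENT = 60   # Lepton 3.x: 4 segments per 160x120 frame
    SEGMENTS_PER_FRAME = 4

    spi = spidev.SpiDev()
    spi.open(0, 0)             # assumed /dev/spidev0.0
    spi.max_speed_hz = 20_000_000
    spi.mode = 0b11            # VoSPI uses SPI mode 3

    def read_packet():
        """Read one packet, skipping discard packets (ID & 0x0F00 == 0x0F00)."""
        while True:
            raw = bytes(spi.readbytes(PACKET_BYTES))
            if (raw[0] & 0x0F) != 0x0F:
                return raw

    def read_frame():
        """Assemble one 160x120 frame as rows of 16-bit pixel values."""
        rows = [[0] * 160 for _ in range(120)]
        for segment in range(SEGMENTS_PER_FRAME):
            for n in range(PACKETS_PER_SEGMENT):   # assumes packets in order
                raw = read_packet()
                pixels = [raw[4 + 2 * i] << 8 | raw[5 + 2 * i]
                          for i in range(PIXELS_PER_PACKET)]
                row = segment * 30 + n // 2        # two packets per row
                half = (n % 2) * PIXELS_PER_PACKET
                rows[row][half:half + PIXELS_PER_PACKET] = pixels
        return rows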

The development of this IP was done on BitSim’s Python-based development platform, the SpiderPig board. Thanks to this simple interface between the logic fabric and the high-level Python environment, debug information and image analysis could be produced almost directly after a bitfile was generated. BitSim has developed tools for thermal imaging, and specifically for integrating the FLIR Lepton sensor using VoSPI.

Using this IP block, it is possible to attach a low-cost FLIR Lepton IR sensor, which sends processed 16-bit data to an FPGA design. The IR sensor captures infrared radiation and outputs a uniform thermal image with temperature measurements throughout the image. This can be used in applications such as mobile phones, gesture recognition, building automation, thermal imaging and night vision, where detection of temperature values and high-temperature scenes is necessary.


Thermal image of a person holding a hot coffee, captured by a Lepton 3.5 IR sensor.

Camera Electronics experiences

BitSim develops electronics for product companies, focusing on imaging and edge computing. We see a constant influx of new sensors, interfaces and key components. With the following few words we want to tell you what we think is interesting in the market, but also bring up experiences, difficulties and things to think about. And, of course, we would be happy to discuss your specific needs and solutions.

Sensors
It can be really difficult to get sensors running with all the configuration they need. Sometimes we find features that are not even documented. And sensors often have hundreds of registers, most of which have to be set correctly before you get an image at all (a sketch of what such a bring-up can look like follows the list below).

  • BitSim has developed a camera with Sony’s IMX290 sensor, which has very good light sensitivity, i.e. it can handle difficult lighting conditions. It has a 10/12-bit ADC, a MIPI interface, resolution up to 1080p, and up to 120 fps. Flipped sideways, the resolution becomes 1109×1945 pixels. There are also a couple of HDR variants available that enable further light-enhancing functionality.
  • FLIR’s Lepton is a relatively inexpensive IR sensor that can be used separately, or in conjunction with standard CMOS sensors to extract additional information from the image through so-called “image fusion”.
  • CCS. We at BitSim hope that sensor and module suppliers will adopt MIPI’s CCS (Camera Command Set) initiative: https://mipi.org/specifications/camera-command-set. The idea is to get started quickly with a sensor’s basic functionality without specific SW drivers. A typical command set can handle things like resolution, frame rate and exposure time, but also more advanced features such as autofocus and single or multiple HDR.
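
To give a feel for what those hundreds of registers mean in practice, here is a minimal bring-up sketch over I2C using the smbus2 module. The device address, register addresses and values are placeholders for illustration, not real IMX290 settings.

    # Sketch of bringing up a MIPI sensor over I2C with smbus2. The
    # device address, register addresses and values are placeholders for
    # illustration only, NOT real IMX290 settings; a real bring-up table
    # has hundreds of entries taken from the datasheet or a reference
    # driver.
    from smbus2 import SMBus

    SENSOR_I2C_ADDR = 0x1A          # placeholder device address

    INIT_TABLE = [                  # (16-bit register, 8-bit value)
        (0x3000, 0x01),             # e.g. standby / power mode
        (0x3007, 0x00),             # e.g. flip / mirror
        (0x3018, 0x65),             # e.g. frame timing, low byte
        # ...hundreds more entries in a real configuration...
    ]

    def write_reg(bus, reg, value):
        """Write one 8-bit value to a 16-bit register address."""
        bus.write_i2c_block_data(SENSOR_I2C_ADDR, reg >> 8,
                                 [reg & 0xFF, value])

    with SMBus(1) as bus:           # assumed I2C bus number
        for reg, value in INIT_TABLE:
            write_reg(bus, reg, value)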

Adapter card
Adapter cards are often a factor that makes development projects more complex, with extremely small connectors that easily break or work loose and give a bad connection.

  • We have developed a dozen different sensor adapter cards that fit development boards from Xilinx, NXP, Technexion etc. for rapid prototyping. There is a lot to think about before these small adapter cards work well, as there are usually different types of cables, connectors and sizes involved.

Interface

  • 4K Video. BitSim has implemented 4K @ 60 video, i.e. HDMI, from an FPGA. In this project we divided the camera into two physically separated parts, a front end (camera sensor) and a back end (processing unit), with Aurora, Xilinx’s high-speed serial protocol, in between.
  • MIPI CSI-2. We have continued the development of our own camera interface IP, which now supports FPGAs with built-in D-PHY IOs (which has the advantage that no external resistor networks or Meticom circuits are needed), e.g. a Xilinx UltraScale+ / MPSoC. Now you can get 2.5 Gb/s per lane!

Processing (Platforms & Algorithms)

  • One alternative for processing the image chain is a combined CPU and FPGA circuit, e.g. Zynq / MPSoC, with the possibility to process in C / C++ and VHDL.
  • We have worked with Python, C / C++ and the open-source image library OpenCV to adapt the contents of an image. With Xilinx Vision (HLS Video Library), it is also possible to use hardware-accelerated OpenCV calls.
  • Another alternative is to process in a SoC circuit, i.e. with an ARM CPU, software and built-in fixed accelerators. NXP (formerly Freescale) has had great success with the i.MX6 family. The next generation, i.MX8, has been available for a couple of years. We have been working with the i.MX8 for a little more than a year, and in our experience NXP’s libraries, documentation and forums are starting to become really useful.
  • We have a complete video chain, i.e. glass-to-glass (sensor to screen), via MIPI CSI-2, V4L and GStreamer with H.264 compression, over Ethernet to the screen. A sketch of such a sender pipeline follows below.
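
As an illustration, the sender half of such a chain can be sketched with GStreamer’s Python bindings; the capture device, caps, bitrate and destination address are assumptions.

    # Sketch of the sender half of a glass-to-glass chain: V4L2 capture,
    # H.264 compression and streaming over Ethernet. Device path, caps,
    # bitrate and destination are assumptions for illustration.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    pipeline = Gst.parse_launch(
        "v4l2src device=/dev/video0 ! "             # assumed capture device
        "video/x-raw,width=1920,height=1080,framerate=30/1 ! "
        "videoconvert ! "
        "x264enc tune=zerolatency bitrate=8000 ! "  # H.264 compression
        "rtph264pay ! "
        "udpsink host=192.168.1.10 port=5000"       # assumed receiver
    )
    pipeline.set_state(Gst.State.PLAYING)

    # Run until an error or end-of-stream, then clean up.
    bus = pipeline.get_bus()
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.ERROR | Gst.MessageType.EOS)
    pipeline.set_state(Gst.State.NULL)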

Please contact us if you are interested or have questions!

CERN

Machine learning collaboration at CERN

CERN, the research facility in Switzerland, and Zenuity, a new ADAS and AD software company owned by Volvo and Autoliv, have announced a collaboration on machine learning based on hardware acceleration. This is exactly the area we at BitSim are exploring with our new platform SpiderPig. The idea is to utilize existing libraries and the Python language to quickly develop areas such as advanced object recognition and machine learning applications.

This announcement from CERN and Zenuity underscores the opportunities we at BitSim see in accelerating machine learning in hardware.


Embedded Conference Scandinavia 2019

Embedded Conference Scandinavia

Meet us at Embedded Conference Scandinavia, Europe’s largest embedded conference, November 5-6 2019, at Kistamässan in Stockholm, booth number 36.
Come and discuss camera sensors and interfaces (MIPI CSI-2), edge computing and accelerated image processing, and snakes.

Presentation – How to accelerate the development of your embedded vision system.