How do ultrasound imaging machines work?

Thread Starter

DarthVolta

Joined Jan 27, 2015
521
This applies to imaging machines in general, like ultrasound, SONAR, RADAR, seismic imaging, etc.

I can imagine how the most basic ultrasound sensor might work: a pulse is sent out, then a reflection comes back after some time and at some intensity. After some circuitry, that could be displayed on an oscilloscope with range on one axis and intensity on the other, like early RADAR.
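Something like this, maybe (a toy sketch of that pulse-echo range idea; the 1540 m/s speed of sound is just an assumed number for soft tissue):

```python
# Minimal pulse-echo (A-mode) range calculation.
# Assumes a speed of sound of ~1540 m/s (typical for soft tissue).

SPEED_OF_SOUND = 1540.0  # m/s, assumed medium

def echo_depth(round_trip_time_s: float) -> float:
    """Convert an echo's round-trip time into target depth.

    The pulse travels out and back, so divide by two.
    """
    return SPEED_OF_SOUND * round_trip_time_s / 2.0

# An echo returning after 65 microseconds corresponds to ~5 cm depth.
print(f"{echo_depth(65e-6) * 100:.1f} cm")
```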

So how did imaging systems go from that to being able to interpret something like, say, a block of wood on a table and display it basically as it looks? If you have an array of sensors, I guess a basic image could start to be resolved, like a compound eye.

I'm not even considering how the waves behave as they move into, out of, and through materials.

I guess miniaturization and data-processing power are what allow a modern ultrasound machine to show images. I bet it takes a lot of signal processing classes, and then programming classes. I guess I'm answering the basics myself, but I'd like a full teardown video with a lot more explanation of SONAR imaging, for example.

 

Alec_t

Joined Sep 17, 2013
14,280
Most imaging involves scanning a pulsed directional beam of radiation (radio, ultrasound) over some field to be imaged and sensing the returned echoes. The directional beam may be generated by a single transmitter (antenna, ultrasound transducer) or by a phased array of transmitters. In order to resolve and image fine detail, the transmitted wavelength has to be very short; hence the transmitted frequency has to be very high.
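As a rough illustration of the phased-array part: each element fires with a small delay so the wavefronts add up along the chosen direction. The element pitch and frequency below are made-up example values, not from any particular machine:

```python
import math

# Illustrative sketch of phased-array steering delays for a linear array.
# Element pitch and frequency are example values, not from a real system.

SPEED = 1540.0             # m/s, speed of sound in tissue (assumed)
FREQ = 5e6                 # Hz, 5 MHz transmit frequency (illustrative)
PITCH = SPEED / FREQ / 2   # element spacing of half a wavelength

def steering_delays(num_elements: int, angle_deg: float) -> list[float]:
    """Per-element firing delays (seconds) to tilt the beam by angle_deg.

    Neighbouring elements are delayed by (pitch * sin(angle)) / c so their
    wavefronts line up along the steered direction.
    """
    step = PITCH * math.sin(math.radians(angle_deg)) / SPEED
    return [i * step for i in range(num_elements)]

wavelength = SPEED / FREQ
print(f"wavelength: {wavelength*1e3:.2f} mm (sets the finest resolvable detail)")
for i, d in enumerate(steering_delays(8, 20.0)):
    print(f"element {i}: fire at +{d*1e9:.1f} ns")
```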
 

DickCappels

Joined Aug 21, 2008
10,152
The first ones that I became aware of were being manufactured as piecework in the late 1960s by my friend (and later boss) and his wife on their dining table. It was an inexpensive version of ultrasound units already on the market.

The output was audio to a headset, and it was used mainly as a fetal heartbeat monitor. The processing was a simple AM mixer that used the outgoing carrier to modulate the reflected, amplified carrier. If memory serves, the transmit and receive transducers were made by scoring a single ground quartz crystal wafer and breaking it in two, so the two would be close to identical.
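That mixer trick is easy to sketch numerically: multiplying the outgoing carrier by the shifted echo gives sum and difference frequencies, and a low-pass filter leaves the audible difference tone. The frequencies below are illustrative, not from that actual unit:

```python
import math

# Sketch of the AM-mixer idea behind a CW Doppler heartbeat monitor.
# All frequencies here are made-up examples.

CARRIER = 2.0e6   # Hz, transmitted ultrasound carrier
DOPPLER = 500.0   # Hz, shift from a moving heart wall
FS = 8.0e6        # Hz, sample rate for this toy simulation

N = 4000
mixed = []
for n in range(N):
    t = n / FS
    tx = math.cos(2 * math.pi * CARRIER * t)              # outgoing carrier
    rx = math.cos(2 * math.pi * (CARRIER + DOPPLER) * t)  # shifted echo
    # Mixer output has components at DOPPLER and at 2*CARRIER + DOPPLER.
    mixed.append(tx * rx)

# A crude moving-average low-pass removes the ~4 MHz term and leaves
# the audible 500 Hz difference tone (amplitude ~0.5).
WINDOW = 64
baseband = [sum(mixed[i:i + WINDOW]) / WINDOW for i in range(N - WINDOW)]
print(f"peak of filtered output: {max(baseband):.2f} (the 0.5*cos difference term)")
```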
 

djsfantasi

Joined Apr 11, 2010
9,156
MIT was the early leader in object recognition in the 1960s, led by Dr. Seymour Papert. While today’s algorithms are significantly more complex and elegant, they build on this early research.

The sensors return a 3D set of points, where edge boundaries are identified by a differential comparison to their neighbors. Then the relationships between intersecting boundaries are calculated. The set of such relationships defines an object.
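As a toy illustration of that neighbour-differencing idea (the values and threshold are made up, and real systems work on 2D/3D gradients rather than a single row):

```python
# Mark a point as a boundary when its value differs sharply from the
# next one. A 1D row of intensities is enough to show the idea.

row = [5, 5, 5, 80, 80, 80, 5, 5]  # made-up intensity values
THRESHOLD = 30

edges = [i for i in range(len(row) - 1) if abs(row[i + 1] - row[i]) > THRESHOLD]
print(edges)  # -> [2, 5]: boundaries between indices 2/3 and 5/6
```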

A simple 2D example is a cube drawn on a piece of paper. You can see nine boundaries. These nine boundaries intersect with each other as follows:
  • 3 - L-shaped intersections
  • 3 - Arrow-shaped intersections
  • 1 - Y-shaped intersection
In simple cases, this set will define a cube or box.
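Here is a toy version of that matching step (my own simplification for illustration, not Papert's actual algorithm): tally the junction types found in a drawing and compare the tally against known object signatures.

```python
from collections import Counter

# Junction type of each vertex in a standard cube line drawing:
# 3 L-junctions, 3 arrow junctions, 1 Y junction.
cube_junctions = ["L", "L", "L", "arrow", "arrow", "arrow", "Y"]

SIGNATURES = {
    "cube": Counter({"L": 3, "arrow": 3, "Y": 1}),
}

def recognize(junctions: list[str]) -> str:
    """Return the object whose junction signature matches the tally."""
    tally = Counter(junctions)
    for name, signature in SIGNATURES.items():
        if tally == signature:
            return name
    return "unknown"

print(recognize(cube_junctions))  # -> cube
```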

Dr. Papert’s research formed the basis of object recognition for years to come. I present it here as a help toward understanding the question in your original post. I studied the MIT Artificial Intelligence Laboratory publications in the early 70s and developed my own object-recognition software based on these principles.
 

MisterBill2

Joined Jan 23, 2018
18,176
Ultrasound systems gain an image by using a whole lot of individual sensors, much like a solid-state camera with a flash, except the "flash" is a short burst of ultrasonic energy. The system receives all of the returning echoes and, through software, places them on the screen based on how long they took to return. It's a bit like sonar, except with many more receive sensors. The big magic is done in the software.
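The usual core of that software is delay-and-sum beamforming: for each pixel, compute how long an echo from that spot would take to reach each sensor, pull the sample each sensor recorded at that moment, and add them up. A bare-bones sketch, with placeholder array geometry, sample rate, and data:

```python
import math

# Bare-bones delay-and-sum beamforming sketch. The array layout, sample
# rate, and echo data are placeholders, not from any real system.

SPEED = 1540.0   # m/s (assumed speed of sound)
FS = 40e6        # Hz, receive sample rate (illustrative)
ELEMENT_X = [i * 0.3e-3 for i in range(16)]  # 16 elements, 0.3 mm pitch

def pixel_value(px: float, pz: float, channels: list[list[float]]) -> float:
    """Brightness of one image pixel at (px, pz) metres.

    For each element, compute the round-trip distance (plane-wave transmit
    straight down to the pixel, then back to the element), grab the sample
    recorded at that time, and sum across elements. Echoes from the pixel
    add coherently; everything else tends to average out.
    """
    total = 0.0
    for x, samples in zip(ELEMENT_X, channels):
        dist = pz + math.hypot(px - x, pz)   # out + back path length
        idx = int(dist / SPEED * FS)         # sample index for that delay
        if 0 <= idx < len(samples):
            total += samples[idx]
    return total

# Usage: channels[i] holds the digitized echo trace from element i.
channels = [[0.0] * 2048 for _ in ELEMENT_X]
print(pixel_value(2e-3, 20e-3, channels))
```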
 