This is a hardware and software project to synchronize multiple RTL-SDR receivers and make it possible to use them for applications such as radio direction finding, passive radar, measuring equipment, radio astronomy interferometers, and MIMO communications. A single clock is shared by all of the receivers. This makes their sampling rates and local oscillator frequencies equal, but it doesn't guarantee that they actually sample simultaneously or that their local oscillators are in the same phase.
Both the local oscillator phase and the sample time offset take a random value every time the dongles are initialized. This happens because commands don't arrive at all receivers at exactly the same time, and because their frequency synthesizers are not known to provide a way to reset their phases. To handle this, the time and phase offsets have to be found every time the receiver is used.
This is done by disconnecting the receivers from their antennas and connecting them to a single white noise source. Cross-correlating this noise reveals the offsets and lets us correct them. Currently the signal is recorded in short blocks, and each block starts with a burst of noise. The current hardware prototype has 3 dongles. One of them keeps its original crystal; its clock is distributed to the crystal pins of the two other dongles through small capacitors.
A better solution for a larger number of dongles would be a separate clock distribution circuit. The inputs of the dongles are switched between the antennas and the noise source by SAD switches, which are controlled by an RC timing circuit triggered by the I2C clock in the dongles.
These switch the inputs to the noise source, and the timing circuit keeps them there for some time after the I2C traffic has finished. The idea is to trigger a noise burst every time the center frequency is changed, which should make fast scanning easier.
The current software is primarily written for radio direction finding based on phase difference between elements of an antenna array.
The software records a block of signal from each dongle, cross-correlates the noise at the beginning of each block to determine the phase and timing offsets, corrects for them, divides the signal into narrow frequency bins using an FFT, and calculates the covariance between each receiver pair for each frequency bin.
The complex argument of an element of the covariance matrix represents the measured phase difference between two antennas and is used for direction finding. The software has mostly been developed on Debian Linux on ordinary x86 PCs, but many other Linux distributions and architectures should work as well.
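The offset-estimation step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the project's actual code; the function name and the synthetic test values are invented for the example.

```python
import numpy as np

def estimate_offsets(ref, other):
    """Estimate the sample delay and phase offset of `other` relative to
    `ref` from a shared noise burst, via FFT-based cross-correlation.
    np.roll(other, lag) * np.exp(1j * phase) then lines up with `ref`."""
    n = len(ref)
    # Zero-pad to 2n so circular correlation approximates linear correlation.
    xc = np.fft.ifft(np.fft.fft(ref, 2 * n) * np.conj(np.fft.fft(other, 2 * n)))
    peak = np.argmax(np.abs(xc))
    lag = peak - 2 * n if peak >= n else peak   # wrap negative lags
    phase = np.angle(xc[peak])
    return lag, phase

# Self-check with synthetic noise: delay one copy by 5 samples and
# rotate it by -0.7 rad, then recover both offsets.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
shifted = np.roll(noise, 5) * np.exp(-1j * 0.7)
lag, phase = estimate_offsets(noise, shifted)
```

With the offsets known, each block can be rolled and phase-rotated before the per-bin covariances are computed.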
Compared to other demodulation schemes, the system is simpler, more versatile, and lower in cost. The performance of the demodulator is demonstrated by measuring the displacement per volt of a thin-film polymeric piezoelectric transducer, based on polyvinylidene fluoride, for ultrasonic applications.
Interferometry is a family of techniques in which waves, usually electromagnetic waves, are superimposed, causing the phenomenon of interference, which is used to extract information. Interferometers are widely used in science and industry for the measurement of small displacements, refractive index changes, and surface irregularities.
In most interferometers, light from a single source is split into two beams that travel different optical paths; the beams are then combined again to produce interference. Under some circumstances, however, two incoherent sources can also be made to interfere.
In analytical science, interferometers are used to measure lengths and the shape of optical components with nanometer precision; they are the highest-precision length-measuring instruments in existence. In Fourier transform spectroscopy they are used to analyze light containing features of absorption or emission associated with a substance or mixture. An astronomical interferometer consists of two or more separate telescopes that combine their signals, offering a resolution equivalent to that of a telescope whose diameter equals the largest separation between its individual elements.
Interferometry makes use of the principle of superposition to combine waves in a way that will cause the result of their combination to have some meaningful property that is diagnostic of the original state of the waves. This works because when two waves with the same frequency combine, the resulting intensity pattern is determined by the phase difference between the two waves—waves that are in phase will undergo constructive interference while waves that are out of phase will undergo destructive interference.
Waves that are neither completely in phase nor completely out of phase will have an intermediate intensity pattern, which can be used to determine their relative phase difference. Most interferometers use light or some other form of electromagnetic wave. Typically a single incoming beam is split into two identical beams by a beam splitter. Each of these beams travels a different route, called a path, and they are recombined before arriving at a detector. The path difference, the difference in the distance traveled by each beam, creates a phase difference between them.
It is this introduced phase difference that creates the interference pattern between the initially identical waves. Anything that changes the phase along one of the paths alters the pattern: this could be a physical change in the path length itself, or a change in the refractive index along the path. The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. If S is an extended source rather than a point source, the fringes take a different form, and use of white light will result in a pattern of colored fringes.
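The in-phase versus out-of-phase behaviour described above follows directly from the standard two-beam intensity formula, which can be checked numerically (the beam intensities here are arbitrary example values):

```python
import numpy as np

# Two-beam interference: I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi),
# where dphi is the phase difference introduced by the path difference.
def intensity(i1, i2, dphi):
    return i1 + i2 + 2 * np.sqrt(i1 * i2) * np.cos(dphi)

# Equal beams: in phase (dphi = 0) gives four times one beam's intensity
# (constructive interference); out of phase (dphi = pi) cancels completely.
bright = intensity(1.0, 1.0, 0.0)
dark = intensity(1.0, 1.0, np.pi)
```

Intermediate values of the phase difference give intermediate intensities, which is what lets an interferometer read out the relative phase.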
In homodyne detection, the interference occurs between two beams at the same wavelength or carrier frequency. The phase difference between the two beams results in a change in the intensity of the light on the detector.
The resulting intensity of the light after mixing of these two beams is measured, or the pattern of interference fringes is viewed or recorded. The heterodyne technique is used for (1) shifting an input signal into a new frequency range, as well as (2) amplifying a weak input signal (assuming use of an active mixer).
A weak input signal of frequency f1 is mixed with a strong reference of frequency f2 from a local oscillator (LO). The mixing produces new frequencies at the sum f1 + f2 and the difference f2 - f1; these new frequencies are called heterodynes. Typically only one of the new frequencies is desired, and the other is filtered out of the output of the mixer.
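The sum-and-difference behaviour of the mixer is easy to verify numerically; the sample rate and frequencies below are arbitrary example values:

```python
import numpy as np

# Multiplying two real sinusoids yields sum and difference frequencies
# (the heterodynes): cos(a)*cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b).
fs = 10_000                       # sample rate, Hz
t = np.arange(fs) / fs            # one second of samples
f1, f2 = 1_000, 1_200             # weak input and LO frequencies, Hz
mixed = np.cos(2 * np.pi * f1 * t) * np.cos(2 * np.pi * f2 * t)

# The spectrum of the mixer output has exactly two strong lines:
# one at f2 - f1 = 200 Hz and one at f1 + f2 = 2200 Hz.
spectrum = np.abs(np.fft.rfft(mixed))
peaks = np.argsort(spectrum)[-2:]   # bin indices of the two strongest lines
```

A real receiver would then band-pass filter around whichever heterodyne it wants to keep.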
In the example shown here, measurements were made in a radio astronomy band. Two sets of double-stacked Quagi antennas were set 30 m apart and the signals combined in a Wilkinson combiner prior to being fed to the receiver, as shown below. The Crab supernova remnant (SNR) was used as a small-diameter source to obtain the interferometer fringe pattern during a transit scan.
This classic pattern can be seen in the diagram opposite (Crab Nebula SNR). The pattern can be translated into a polar diagram for the interferometer, shown below; the individual 'synthesised' antenna beams are shown in blue. What follows is a very basic description of the principles of aperture synthesis using interferometers. If we measure the fringe visibility, gamma, at a wide range of baselines, we can generate a fringe visibility graph by plotting the value of gamma at each baseline.
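Assuming gamma here is the usual fringe visibility, (Pmax - Pmin) / (Pmax + Pmin), it can be extracted from a transit-scan power trace as below; the trace is synthetic and the numbers are illustrative only:

```python
import numpy as np

# Fringe visibility gamma from a transit-scan power trace:
#   gamma = (P_max - P_min) / (P_max + P_min)
def fringe_visibility(power):
    p = np.asarray(power, dtype=float)
    return (p.max() - p.min()) / (p.max() + p.min())

# Synthetic fringe pattern: mean power 1.0 modulated by fringes of
# amplitude 0.6, so the true visibility is 0.6.
t = np.linspace(0, 1, 1000)
trace = 1.0 + 0.6 * np.cos(2 * np.pi * 5 * t)
gamma = fringe_visibility(trace)
```

A resolved source washes the fringes out, so plotting gamma against baseline length is what builds up the visibility graph described above.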
With such a system it is possible to build up a picture of the fringe visibility function for a large number of angles across the source (see the example of the yellow trace in the diagram opposite, and the example below). Most modern radio telescopes based on interferometry use the rotation of the Earth as a means of quickly sampling all the spatial-frequency data over many angles; the original idea was devised by Martin Ryle at Cambridge. As the Earth rotates, the angle of the baseline to the source changes.
The 'projected' baseline length also changes. If there are many antennas, as shown on the right of the diagram, many angles and baselines change at once and a great deal of data can be collected simultaneously. The Very Large Array in New Mexico has 27 antennas.
Interferometric synthetic-aperture radar (InSAR) is a geodetic method that uses two or more synthetic aperture radar (SAR) images to generate maps of surface deformation or digital elevation, using differences in the phase of the waves returning to the satellite or aircraft.
The technique can potentially measure millimetre-scale changes in deformation over spans of days to years. It has applications for geophysical monitoring of natural hazards, for example earthquakes, volcanoes and landslides, and in structural engineering, in particular monitoring of subsidence and structural stability.
Synthetic aperture radar (SAR) is a form of radar in which sophisticated processing of radar data is used to produce a very narrow effective beam.
It can be used to form images of relatively immobile targets; moving targets can be blurred or displaced in the formed images. SAR is a form of active remote sensing — the antenna transmits radiation that is reflected from the image area, as opposed to passive sensing, where the reflection is detected from ambient illumination.
SAR image acquisition is therefore independent of natural illumination and images can be taken at night. Radar uses electromagnetic radiation at microwave frequencies; the atmospheric absorption at typical radar wavelengths is very low, meaning observations are not prevented by cloud cover.
SAR makes use of the amplitude and the absolute phase of the return signal data. Since the outgoing wave is produced by the satellite, the phase is known, and can be compared to the phase of the return signal. The phase of the return wave depends on the distance to the ground, since the path length to the ground and back will consist of a number of whole wavelengths plus some fraction of a wavelength.
This is observable as a phase difference or phase shift in the returning wave. The total distance to the satellite (i.e., the number of whole wavelengths plus the remaining fraction) is what determines the measured phase. In practice, the phase of the return signal is affected by several factors, which together can make the absolute phase return in any SAR data collection essentially arbitrary, with no correlation from pixel to pixel.
To get any useful information from the phase, some of these effects must be isolated and removed. Interferometry uses two images of the same area taken from the same position or, for topographic applications, slightly different positions and finds the difference in phase between them, producing an image known as an interferogram.
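Forming the interferogram amounts to multiplying one complex image by the conjugate of the other and keeping the phase. The sketch below uses toy single-look complex (SLC) arrays, not real SAR data; the 0.5 rad offset stands in for a path-length change common to the scene:

```python
import numpy as np

# Interferogram from two co-registered complex (SLC) images: the product
# of one image with the conjugate of the other cancels the common ground
# phase and leaves only the phase difference between acquisitions.
def interferogram(slc1, slc2):
    return np.angle(slc1 * np.conj(slc2))

# Toy example: random (pixel-to-pixel uncorrelated) ground phase, plus a
# known 0.5 rad shift in the second acquisition. Because the ground
# contribution is identical in both images, it cancels, and the
# interferogram is a flat 0.5 rad everywhere.
rng = np.random.default_rng(1)
ground = np.exp(1j * rng.uniform(0, 2 * np.pi, (4, 4)))
igram = interferogram(ground * np.exp(1j * 0.5), ground)
```

This is exactly the "consistent but arbitrary" property described above: the random per-pixel phase drops out as long as nothing on the ground changes.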
The most important factor affecting the phase is the interaction with the ground surface. The phase of the wave may change on reflection, depending on the properties of the material. The reflected signal back from any one pixel is the summed contribution to the phase from many smaller 'targets' in that ground area, each with different dielectric properties and distances from the satellite, meaning the returned signal is arbitrary and completely uncorrelated with that from adjacent pixels.
Importantly though, it is consistent — provided nothing on the ground changes the contributions from each target should sum identically each time, and hence be removed from the interferogram. Once the ground effects have been removed, the major signal present in the interferogram is a contribution from orbital effects.
For interferometry to work, the satellites must be as close as possible to the same spatial position when the images are acquired. This means that images from two satellite platforms with different orbits cannot be compared, and for a given satellite, data from the same orbital track must be used. In practice the perpendicular distance between them, known as the baseline, is often known to within a few centimetres but can only be controlled on a scale of tens to hundreds of metres.
This slight difference causes a regular difference in phase that changes smoothly across the interferogram and can be modelled and removed. The slight difference in satellite position also alters the distortion caused by topography, meaning an extra phase difference is introduced by a stereoscopic effect. The longer the baseline, the smaller the topographic height needed to produce a fringe of phase change — known as the altitude of ambiguity.
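The standard repeat-pass expression for the altitude of ambiguity is h_a = lambda * R * sin(theta) / (2 * B_perp). This formula is not stated in the text above, and the numbers below are loosely C-band-satellite-like example values chosen for illustration:

```python
import math

# Altitude of ambiguity: the topographic height change that produces one
# full 2*pi fringe in a repeat-pass interferogram.
wavelength = 0.056            # radar wavelength, m (C-band example)
slant_range = 850e3           # slant range to the ground, m
incidence = math.radians(23)  # incidence angle
b_perp = 100.0                # perpendicular baseline, m

h_a = wavelength * slant_range * math.sin(incidence) / (2 * b_perp)
# For these values one fringe corresponds to roughly 90-95 m of height,
# and doubling the baseline would halve that, as the text describes.
```

This is why long baselines are more sensitive to topography, while short baselines are preferred when only deformation is of interest.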
This effect can be exploited to calculate the topographic height, and used to produce a digital elevation model (DEM). If the height of the topography is already known, the topographic phase contribution can be calculated and removed.
This has traditionally been done in two ways. In the two-pass method, elevation data from an externally derived DEM is used in conjunction with the orbital information to calculate the phase contribution.
In the three-pass method two images acquired a short time apart are used to create an interferogram, which is assumed to have no deformation signal and therefore represent the topographic contribution.
This interferogram is then subtracted from a third image with a longer time separation to give the residual phase due to deformation.
The drivers and userspace tools that made rtlsdr what it is today were created by the osmocom people. The dongles with an E4000 tuner cover a very wide tuning range in my experience, with a gap partway up the range.
All of the generic dongle antenna inputs are 75 Ohm impedance (some SDR-branded versions have 50 Ohm inputs). The dynamic range for most dongles is around 45 dB. The highest safe sample rate is a little over 2 MS/s. For the data transfer mode, USB 2 is required; USB 1.1 does not have the bandwidth.
The rtlsdr RTL2832U chips use a phase-locked-loop based synthesizer to produce the local oscillator required by the quadrature mixer. The resampled output can be up to 3.2 MS/s, and the minimum resampled output is 0.225 MS/s.
Check this reddit thread for caveats and details. The actual output is interleaved: one byte of I, then one byte of Q, with no header or metadata timestamps. You'll almost certainly notice a stable spike around DC.
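Converting that raw interleaved byte stream to complex samples is a one-liner in NumPy. This is a generic sketch (the function name is mine), following the format just described:

```python
import numpy as np

# rtl_sdr output is interleaved unsigned 8-bit I/Q with no header:
# byte 0 = I0, byte 1 = Q0, byte 2 = I1, ...  Convert to complex floats
# centred on zero; the ~127.5 offset in the raw bytes is one reason a
# DC spike shows up if it is not removed.
def bytes_to_iq(raw):
    data = np.frombuffer(raw, dtype=np.uint8).astype(np.float32)
    iq = (data - 127.5) / 127.5          # scale to roughly [-1, +1]
    return iq[0::2] + 1j * iq[1::2]

# Two example samples: full-scale positive I, then full-scale negative I.
samples = bytes_to_iq(bytes([255, 128, 0, 127]))
```

The same conversion works on a file recorded with the rtl_sdr capture tool, read in binary mode.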
For general use SDR# is probably the best application for Windows, with secondary Mono-based Linux and Mac support. Luckily there are Linux and OS X native binary packages with all dependencies (e.g., GNU Radio) these days.
For doing diagnostic and low-signal-level work, Linrad is full-featured and fast. Stock Linux kernels ship DVB-T drivers for these dongles; those are for the DVB-T mode and not the debug mode that outputs raw samples, so they need to be kept away from the device. While the sampling bandwidth is only a couple of MHz, with frequency hopping you can survey very large bandwidths. It starts very far zoomed out, and it might load a bit slowly too. This page is mostly just notes to myself on how to use rtlsdr's core applications, 3rd-party stuff using librtlsdr and wrappers for it, and lots on using the gr-osmosdr source in GNU Radio and GNU Radio Companion.
This isn't a "blog"; don't read it sequentially, just search for terms of interest or use the topics menu. These days, for most people doing most things, you want to get a dongle with an R820T2 tuner. They'll come with MCX coaxial connectors. These work fine for most things.
There are photos of the E4000s up at the top of the page, and of an R820T-based dongle in the "mini" format off to the left (most minis do not have EEPROMs for device ID). Some of the cheaper dongles used to occasionally omit protection diodes, but that is no longer an issue. An F-to-MCX adapter works for the other style of dongles. The default design has the tuner taking 75 Ohms, so that's what they all are, except the SMA versions. Only three tuners are very desirable at this time. The tuners themselves are set up and retuned with I2C commands.
E4000 tuners used to re-tune twice as fast as R820T tuners, but this was fixed in keenerd's experimental branch, where the R820Ts actually tune a tiny bit faster than the E4000s. These changes were later adopted by mainline rtlsdr. But that was in the old days, when rtlsdr sticks re-tuned relatively slowly.
BladeRF astronomical interferometer
If you follow the links provided below, you can see some of the data from work that I've done at 60 MHz and at a higher frequency.
I haven't tried out the various approaches that are emerging to synchronize the data streams from multiple bladeRFs yet, but I am very keen to use 1 PPS synchronization. There are a few kinks in my setup that occasionally lead to dropped samples, but most of the time I am able to record data at 15 MSPS simultaneously from three radios.
If anyone else is doing any work using BladeRFs for interferometry, I would be glad to hear about your experiences. I know you're waiting for the synchronized RX functionality, which I'll be looking to merge soon. There's a branch and some associated documentation here that I'll look to finish reviewing and testing. Cheers, Jon. I'm still a ways off from being able to correlate signals in real-time, so I am content to process everything after recording right now.
As I re-read this, I realize I articulated things poorly; just wanted to clarify, so as not to confuse other people. The purpose of this is to minimize frequency error and to allow people to re-calibrate their device. What I feebly tried to articulate was that this conflicts with the aforementioned pin assignment, since these two things were developed in parallel; we will change this pin when we merge this trigger-based functionality.
However, I will talk to the guys about what it would take to get this into the queue. Hopefully that sheds a bit more light on what's to come. I'll try to touch base with you once it's all ready, and will be looking forward to future developments in your projects!
I assume that a master radio could send a pulse to slave radios as an alternative, then no extra hardware would be needed. I think it would be straightforward to characterize the delay and use that for synchronization for those of us that do not need higher levels of frequency accuracy. I'll be in touch once we have that merged!
A GRC flowgraph and a Python script communicate over a TCP socket now, so data doesn't need to be stacked up on big solid-state drives and post-processed. See the link below. Once the sample-stream synchronization is incorporated into the bladeRF master code, I will be ready to do sustained astronomical interferometry observing.
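The flowgraph-to-script link can be sketched as below. This assumes the flowgraph streams raw complex64 samples over TCP (the actual format used in the project isn't specified here); a socketpair stands in for the real network connection, and the helper name is invented for the example:

```python
import socket
import numpy as np

def read_samples(sock, n):
    """Read exactly n complex64 samples from a stream socket."""
    need = n * 8                      # complex64 = 8 bytes per sample
    buf = b""
    while len(buf) < need:            # recv may return partial chunks
        chunk = sock.recv(need - len(buf))
        if not chunk:
            raise EOFError("stream closed mid-read")
        buf += chunk
    return np.frombuffer(buf, dtype=np.complex64)

# Demo: push three samples through an in-process socket pair, standing in
# for a GRC TCP sink on one end and the processing script on the other.
tx, rx = socket.socketpair()
tx.sendall(np.array([1 + 2j, 3 + 4j, 5 + 6j], dtype=np.complex64).tobytes())
rxed = read_samples(rx, 3)
tx.close()
rx.close()
```

The loop around recv matters: TCP delivers a byte stream, so fixed-size sample blocks have to be reassembled on the receiving side.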
I still need to work on porting the original writeup by Jan to a stand-alone document, but for the time being I've added a very simple example here. I hope you're still up for working on your interferometer, as I know I'll be excited to see your results!
Best regards, Jon. I'll have FPGA v0. I've got three radios that need to be synchronized and I think I should be able to figure it out. Ultimately, I would like to be able to execute the triggering from within a GnuRadio flowgraph.
I will keep everyone posted on my implementation. Thanks, David. Thank you for sharing your project with everyone here! I need to think about how to expose this functionality in gr-osmosdr so that you could access it from within a flowgraph. Let me know if you have any questions! Should have some results soon. Let me know if you have any questions or need clarification on anything.