Audio Digital Delay with DRAM and Arduino

Aka “ADDDA” or “AuDiDeDrAr” or “aww dee dee drawer” or “A3DA”

I’ve had this idea bouncing around in my head that you could use 1-bit-wide DRAM as a delay line by simply counting up through its addresses, reading and writing as you go. 1-bit-wide DRAM chips like the M3764 have separate pins for Data In and Data Out, which makes the read-and-write method easier.

The light bulb moment was coming across an old post on diystompboxes.com where one commenter provides a short snippet of code implementing a Delta-Sigma analog-to-digital converter using the Arduino’s analog comparator pins. I had planned to do this purely in software, using the normal ADC pins and calculating the delta myself, but the built-in comparator makes it dead simple!

You can just see the OKI DRAM chip under all those wires.
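That comparator trick can be modeled in plain Python. This is my own sketch of a first-order Delta-Sigma modulator, not the snippet from the forum post: the input is assumed to be normalized to the 0–1 range, and both the comparator and the integrator are simulated.

```python
# Toy first-order Delta-Sigma modulator. On the Arduino the comparison
# is done by the analog comparator against the filtered feedback bit;
# here the integrator and the comparison are both simulated.
def delta_sigma(samples):
    """Turn samples in [0, 1] into a 1-bit stream whose average
    tracks the input."""
    acc = 0.0
    bits = []
    for x in samples:
        acc += x                      # integrate the input
        bit = 1 if acc >= 0.5 else 0  # comparator decision
        acc -= bit                    # subtract the fed-back bit
        bits.append(bit)
    return bits

# A constant input of 0.25 produces a stream that is high
# one quarter of the time:
stream = delta_sigma([0.25] * 1000)
print(sum(stream) / len(stream))   # -> 0.25
```

The delay line never needs to decode this stream; it just shuttles the raw bits through the DRAM and lets an analog filter on the output reconstruct the audio.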

So, armed with an easy way to generate a one-bit-wide data stream from an analog signal, I went about hooking up the DRAM chip to a clone Arduino Pro Mini. There are quite a few “test a DRAM chip with an Arduino” projects out there, but the datasheet for the OKI chip has good timing diagrams that give the gist of what you need to do. DRAM has a shared set of address pins for the row and column you’re selecting, which you can think of as two halves of the full address. To get those halves in, you put the row on the address lines and strobe the /RAS pin. Then you put the column on the address lines and strobe the /CAS pin. Then your data is on the Data Out pin. Writing involves putting the data you want to write on the Data In pin and strobing the /WE pin after you strobe the /CAS pin. That’s really all there is to it. You’ll find there are some shortcuts you can take to speed up access, like setting the row only once for any number of columns you’d like to access. You can also read the Data Out pin right before writing new Data In. I do both of these in my implementation to increase the sample rate.
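The access sequence can be sketched as a toy model. The pin names (/RAS, /CAS, /WE) follow the datasheet, but the class and function names here are my own stand-ins for the actual Arduino pin-twiddling, with no timing modeled:

```python
# Toy model of the 64K x 1 DRAM access sequence described above.

class Dram64Kx1:
    """64K x 1 DRAM as 256 rows x 256 columns of single bits."""
    def __init__(self):
        self.cells = [0] * (256 * 256)
        self.open_row = 0

    def strobe_ras(self, row):
        # Latch the row half of the address. The row stays open,
        # so any number of columns can be accessed afterwards
        # (the shortcut mentioned above).
        self.open_row = row

    def read_modify_write(self, col, bit_in):
        # Latch the column (/CAS), read Data Out, then strobe /WE
        # to write Data In -- read and write in a single access.
        addr = self.open_row * 256 + col
        bit_out = self.cells[addr]
        self.cells[addr] = bit_in
        return bit_out

def delay_line_step(dram, addr, bit_in):
    """One sample through a 65536-bit delay line: read the oldest
    bit at this address and overwrite it with the newest."""
    dram.strobe_ras(addr >> 8)                          # row = high byte
    return dram.read_modify_write(addr & 0xFF, bit_in)  # column = low byte
```

Stepping `addr` through all 65536 addresses and wrapping around gives a fixed delay of one full pass through the chip.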

To start out, I did everything very simply, using the built-in Arduino functions to make the code easy to read and understand. I then measured the performance of the main loop by looking at the Delta-Sigma signal’s minimum change period. This gave me a baseline, and then I systematically went through the code swapping out the built-in functions for faster implementations one by one, measuring any increase in performance. If a change didn’t lead to any improvement, I wouldn’t commit it. Instead I’d commit a comment noting that I’d tried it. In retrospect, it would have been better to use git revert so I had a clearer history of exactly what I tried.

Here I am demonstrating the delay at maximum delay length. I add some feedback to make it act like a reverb towards the end.

Doing this I was able to improve the performance of the DRAM access by a factor of about 16. The original version took 8 seconds to cycle through the entire memory and the final version took about 500 ms. My commits show the improvements, although I realized later that I was measuring the wrong signal! Still, it was at least indicative of the improvements. All of the timing in this project has a lot of jitter due to the many different possible code paths, with no attempt to balance them out.

In the end, the DRAM /WE pin is the best measure of how often you’re writing to the DRAM. It strobes at about 139 kHz. I measured the actual audio delay produced by the system with my oscilloscope, and it is about 480 ms at its longest. Those two numbers agree:

1 second      | 64*1024 samples           seconds
--------------|----------------- = 0.471 ---------
139 k samples | 1 buffer                  buffer

I’m new to working directly with delta-sigma converters, and after reading a few pages about it this morning, I’m not sure what I’ve built is exactly a delta-sigma converter at all!

My current understanding is that a 139 kHz sampling rate (F) means a Nyquist frequency (F/2) of 69.5 kHz regardless of the type of converter used. I found a paper by Aziz, Sorensen, and Van der Spiegel from 1996 describing how delta-sigma converters work, and it gives some equations.

Letting the oversampling ratio, f_s / (2 * f_b) = 2^r …

[Therefore] every doubling of the oversampling ratio i.e., for every increment in r, the SNR improves by 9 dB, or equivalently, the resolution improves by 1.5 bits.

Aziz, P., Sorensen, H., & Van der Spiegel, J. (1996). An overview of sigma-delta converters. IEEE Signal Processing Magazine, 13(1), 61–84. https://doi.org/10.1109/79.482138
f_s -> sampling frequency
f_b -> signal bandwidth
r -> oversampling

f_s / ( 2 * f_b) = 2^r

f_s / 2^r = 2 * f_b
f_s / (2 * 2^r) = f_b
f_s / 2^(r+1) = f_b

139e3 / 2^(r+1) = f_b
r = 0, 139e3 /  2 = 69.5 kHz,  Nyquist sampling
r = 1, 139e3 /  4 = 34.8 kHz, +1.5 bits
r = 2, 139e3 /  8 = 17.4 kHz, +3.0 bits
r = 3, 139e3 / 16 =  8.7 kHz, +4.5 bits
r = 4, 139e3 / 32 =  4.3 kHz, +6.0 bits
r = 5, 139e3 / 64 =  2.2 kHz, +7.5 bits

Those numbers would suggest a fairly lo-fi device. And certainly what I have running on my desktop is by no means producing quality audio. But it also doesn’t sound that bad? I’m losing a bit in my calculations because I should be counting the “single bit” of the comparator itself. If we take that bit and then work backwards…

f_s / (2 * f_b) = 2^r
total_bits = 1.5 * r + 1
8 bits = 1.5 * r + 1
r = 7 / 1.5 = 4.667
139e3 / 2^(4.667 + 1) = 139e3 / 50.80 = 2.74 kHz

So the system is operating at 8 bits up to a bandwidth of about 2.74 kHz. That sounds about right. What happens if I add more features and reduce the sampling rate to 100 kHz?

100e3 / 50.80 = 1.97 kHz

What happens if I find some optimizations and increase the sampling rate to 200 kHz?

200e3 / 50.80 = 3.94 kHz

Someone check my math.
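Taking up that invitation, the relation f_b = f_s / 2^(r+1) together with total_bits = 1.5 * r + 1 can be checked with a quick throwaway script (my own):

```python
# Check the oversampling arithmetic:
#   total_bits = 1.5 * r + 1
#   f_b = f_s / 2**(r + 1)

def oversampling_r(total_bits):
    """Invert total_bits = 1.5 * r + 1."""
    return (total_bits - 1) / 1.5

def bandwidth(f_s, total_bits):
    """Signal bandwidth f_b from f_s / (2 * f_b) = 2**r."""
    r = oversampling_r(total_bits)
    return f_s / 2 ** (r + 1)

for f_s in (139e3, 100e3, 200e3):
    print(f"{f_s / 1e3:.0f} kHz sampling -> "
          f"{bandwidth(f_s, 8) / 1e3:.2f} kHz bandwidth at 8 bits")
```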

I vary the delay length from max to about minimum, then set it somewhere in the middle. The reverb feedback is still applied because that makes it easier to hear the changes in delay length.

National Semiconductor 4510 Mathematician

I have a small collection of vintage calculators that I stumbled into collecting. I found one at a garage sale, and then one was given to me, then I found a neat one on eBay for a good price… Before I knew it, I was a calculator collector.

I actually use most of them despite having a great calculator app on my phone because I prefer their physical interfaces. I have one on each desk and one in my bag so I don’t have to go searching. I don’t have that many bags and desks though so there is also a small stash in a drawer.

The brown and tan color scheme is very 70s. I think they’d have used wood grain print adhesive vinyl if they could have.

My latest addition is a National Semiconductor 4510 Mathematician from the mid 70s. It has an 8 digit red LED display and runs on a 9 volt battery. There is a jack on the top edge for connecting a wall supply if you’ve got a lot of math to do.

It is in great condition and the seller even included a brand new battery. It is one of the lesser RPN calculators of the 70s and not expensive. Like most of my collection, it is not valuable, but it is uncommon.

This model isn’t programmable, although they made a version that was: the Mathematician PR. Those are a little rarer, but their programmability is so limited that I didn’t want to deal with finding a nice one at a price I was willing to pay. I also know that I don’t use the programmability of the much nicer calculators I already have, so it isn’t something I’d use anyhow.

What makes this model stand out is its RPN entry method. If you’re not familiar with RPN, there are some great introductions online. I tried to explain it recently and was told that it sounded insane. You get used to it! It starts to make sense… eventually.
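For a taste of the idea, here is a minimal RPN evaluator in Python (an illustration of mine, nothing to do with the calculator’s internals): numbers push onto a stack, and each operator pops the top two entries and pushes the result.

```python
# Minimal RPN evaluator: operands go on a stack, operators
# consume the top two stack entries.
def rpn(tokens):
    stack = []
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # right operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 5 with no parentheses and no equals key:
print(rpn("3 4 + 5 *".split()))   # -> 35.0
```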

This model’s main downside is that it doesn’t do scientific notation, so its range is limited. Some of the math I do most often is calculating values for circuit components, which are specified in steps of one thousand units. For example, resistors are commonly available in units of ohms, kilohms, and megohms, so you do a lot of math with numbers involving 10^3 and 10^6. Capacitors are similar but in much smaller units – you often deal with picofarads, and that’s 10^-12 farads. So I’ll have to keep track of the exponents myself when doing those kinds of calculations.
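As a concrete example of that exponent bookkeeping (my own, with made-up but typical component values), here is the cutoff frequency of an RC low-pass filter built from a 10 kΩ resistor and a 100 pF capacitor:

```python
# The exponent bookkeeping the calculator can't do for you:
# component values carry their own powers of ten.
import math

k = 1e3      # kilo
p = 1e-12    # pico

R = 10 * k   # 10 kilohms
C = 100 * p  # 100 picofarads

f_c = 1 / (2 * math.pi * R * C)   # RC low-pass cutoff frequency
print(f"{f_c / 1e3:.1f} kHz")     # -> 159.2 kHz
```

On a calculator without scientific notation you’d punch in 1e4 and 1e-10 as plain numbers and track the 10^-6 in your head.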

The NatSemi Mathematician is delightfully slow for some operations. For example computing a logarithm of a number is slow enough that the calculator displays an animation of sorts to show that it is “thinking”.

Computing the natural logarithm of pi takes long enough for you to wonder when you’d ever want to know such a value.

I don’t know how much I’ll use this addition to my collection. If I leave it where I can see it, I’ll use it occasionally if only as a muse for an earlier time in computing history.

From the notebook: Tape Transports

Today’s notebook sketch is some ideas for building a tape “transport” – the mechanical bits that move the tape around in the right way and at the right tension.

Three transport configurations and my current thinking on a capstan.

I have a weird fascination with magnetic storage media and tape in particular. It was a key technology in computing for decades and it has more or less completely disappeared.

All that time in home, commercial, and industrial use has left lots of bits and bobs to experiment with, although it is all very quickly disappearing.

Other than the playback and record heads and the media itself, the devices can be recreated from scratch. (And let me get back to you on making heads and media…)

Prototype Game of Life Synth Module

Conway’s Game of Life (CGoL) has always fascinated me. It is probably the most well known of all cellular automata and also probably the most intuitive. Yet even simple patterns can turn into complex sequences of shapes, patterns, and noise.

Years ago, when learning about the HTML5 WebAudio API, I came across a fun little demo called Blips of Life by Mike M. Fleming. Use your mouse to draw some dots and then click the triangle Play icon in the bottom left. Great, right? I’ll let you play around with that for a while. Leave it running while you read, perhaps?

This is in 1U Eurorack format.

When it came time to start prototyping new modules for my modular synth, I was inspired to recreate Mike’s work in hardware. I didn’t have exactly the parts to fully recreate his Blips of Life, but using the parts I had on hand I made a prototype.

My version has only an 8×8 grid and only a major pentatonic scale. The small grid means there are fewer possible patterns, though not so few that it is monotonous. The major pentatonic scale is fine. The largest problem with the prototype is that I wrote it in CircuitPython, which has no interrupt support. I love Adafruit – they’re a great company and they design terrific boards. But removing interrupts from their fork of MicroPython has cut several of my projects short.

The prototype works pretty well and exposed a new design challenge: how do you deal with “games” that end in loops? A CGoL pattern can end in one of three ways: it can go “extinct”, settle into a “steady” state, or cycle through a finite loop of boards. The first case is easy to detect and deal with: if all the cells of the grid are off, repopulate the board. You can detect a steady state by comparing the next board with the previous one – if they’re identical, repopulate.

But loops can be any arbitrary length, and can step through rather complex patterns. The only way I know to detect them is to have a list of boards known to be part of or lead to a loop. I’ve got some ideas how to do that either via live loop detection or with a precomputed list of boards. As yet, the performance limitations of CircuitPython really prevent tackling it. I’ll need to reimplement the code in C++ using Arduino. Hats off to Adafruit for supporting both Python and Arduino on their boards.
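One way to sketch the live loop detection (in desktop Python, with names of my own choosing): represent the board as a frozenset of live cells so whole boards can be hashed and remembered since the last repopulation.

```python
# End-of-round detection for an 8x8 Life grid: extinction and steady
# state by direct checks, loops of any length by remembering every
# board seen since the last repopulation.
from collections import Counter

def step(board, size=8):
    """One Game of Life generation on a size x size wrapping grid.
    `board` is a frozenset of live (x, y) cells."""
    neighbors = Counter(((x + dx) % size, (y + dy) % size)
                        for (x, y) in board
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
    return frozenset(cell for cell, n in neighbors.items()
                     if n == 3 or (n == 2 and cell in board))

def needs_repopulation(board, seen):
    """True on extinction, steady state, or a loop of any length.
    `seen` holds every board since the last repopulation, so any
    repeat -- period 1 or longer -- is caught."""
    if not board or board in seen:
        return True
    seen.add(board)
    return False
```

A steady state is just a loop of period 1, so the seen-set handles both non-extinct cases at once; the cost is remembering every board since the last reset, which is exactly the kind of bookkeeping the C++ rewrite should make affordable.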

Book recommendation: Turing’s Cathedral by George Dyson

If you’re interested in the early history of computing, check out Turing’s Cathedral by George Dyson. It covers an interesting middle phase between the original electronic digital computers and the wide commercialization of computers in the late 50s.

The cover of the book recalls a punch card.

Specifically, it examines the people and development around “the IAS machine” at the Institute for Advanced Study at Princeton. Big names and not-so-big names make an appearance, and it is a detailed account of the forces at play: academic, industrial, military, and political.

The design of the “IAS machine” was the pattern for dozens of machines around the world. More than one country’s “first computer” was built using the design developed by the people at IAS. I think of it as the first practical computer – its construction had to solve a lot of problems that the original electronic computers didn’t need to address because they were just struggling to exist.

I’m not going to lie: there are a lot of “white men in ties” involved.

It’s been a while since I finished the book, but I do refer to it when I need details of how some design constraint was surmounted. It also includes enough biographical information that I use it to jog my memory of exactly who was who. The world of computing was still small enough then that people who contributed to the IAS project show up in other places pretty often.

It’s widely available. It looks like Thriftbooks has it for under $5, so you could get it for free if you’ve got some reward points there.

Ray Diagram: Now with Measurements

I’ve continued to work on the optical ray diagram tool prototype. I added a way to measure the effective focal length (EFL) of the lens system. It isn’t automatic, but by adjusting the parameters you can align an intersection at the optical axis and read off the EFL. Obviously this should be a one-button-click sort of thing, but it is kind of interesting to see how the various parameters affect the EFL.

The UI is still very rough and the code is even worse. But I’ve actually been using it!
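The manual measurement can be checked against paraxial theory. Here is a small ray-transfer sketch of my own (thin lenses only, whereas the tool traces real rays through real surfaces): trace a ray that enters parallel to the optical axis, and the EFL is the entry height divided by the downward slope of the exiting ray.

```python
# Paraxial EFL of a train of thin lenses, found by tracing a single
# ray that enters parallel to the optical axis.

def efl(lenses):
    """lenses: [(focal_length, gap_to_next_element), ...];
    the gap on the last element is ignored."""
    y, u = 1.0, 0.0          # ray height and slope at entry
    for f, gap in lenses:
        u -= y / f           # thin-lens refraction: u' = u - y/f
        y += u * gap         # transfer to the next element
    return -1.0 / u          # EFL = entry height / -(exit slope)

# A single thin lens reports its own focal length:
print(round(efl([(100.0, 0.0)]), 2))                 # -> 100.0
# Two f=100 lenses 50 apart: 1/f = 1/f1 + 1/f2 - d/(f1*f2)
print(round(efl([(100.0, 50.0), (100.0, 0.0)]), 2))  # -> 66.67
```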

My main area of interest, before going on to automation, is identifying and coding all the various measurements you want of a lens. To identify these, I’ve been reading the excellent Applied Optics and Optical Engineering, edited by Rudolf Kingslake. Chapter 3, Photographic Objectives, traces the history of the development of lenses from the invention of photography to the book’s publication in the 1960s. I would recommend starting there if you already have some familiarity with optics. If you’re new to optical design, start with Chapter 1, Lens Design. I got my copy from the public library, but you can also borrow a digital copy from the Open Library on archive.org.

I realized after reading Chapter 3 that I should change the example lens configuration in my prototype to a Cooke Triplet. As the book points out, a lot of modern lens designs can be traced to, or analyzed as, variants of the Cooke Triplet. It is also unusual in having only three elements yet performing well enough to warrant the work of designing one yourself. And it is non-trivial enough that you want an automated tool to design one, which makes it a good example.

The next step will be to add proper measurements of the various aberrations and distortions. I’ll be using worked examples from Applied Optics and Optical Engineering to check the calculations of my tool. The current default configuration is from this student project in MIT OpenCourseWare by Choi, Cooper, Ong, and Smith. I think I’ve already found a discrepancy in my results so my work is cut out for me.

Another source for a worked example is Dennis Taylor’s original patent from 1893. While Taylor invented it, the design is named after the company he was working for at the time – T. Cooke & Sons of York.