Creating a Financial Foundation for Shared Infrastructure

Over a decade ago, I was one of the founding members of the Dallas Makerspace. My major contribution was designing the financial models that gave the group a solid financial footing for renting its first dedicated space.

The other founders were more involved in all the growing pains of starting an organization like that, and I moved to another city and didn’t lift those boulders. But (as far as I know) the original membership models kept the group bootstrapped long enough to attract more members and grow into the organization they are today.

A member of the ThePrepared Slack recently asked how I did this, and in retelling the tale, I realized that I’d never written down the methods I used. I think sharing them here might be helpful to other people looking to start their own hackerspace, makerspace, or other opt-in, volunteer-driven group that wants a single costly piece of shared infrastructure.

The Problem, Or What Not To Do

First let me lay out the problem. A volunteer organization starts with zero money. It can ask for donations to raise some money, and then it can spend that money on projects. This model works fine as long as projects come up less often than you can reasonably ask people for money. But if the organization wants to rent a space, it now has a monthly operating cost that extends into infinity. There is no point at which you’ll have raised enough money to pay for all the rent forever; you can only raise enough money for some number of months.

You can think of each month’s rent as a “monthly project” you need to raise money for. If your organization has a regular meeting once a week, that means you will be either about to ask for money, asking for money, or telling people how much money you raised three out of the four weeks of the month. A primary task of the volunteers who have donated their time to keep the organization running will be figuring out how to collect enough money each month.

Suffice it to say, unless your organization is a group of people who love asking other folks for money, this is not an activity that volunteers can sustain long term. They did not join the ranks of your group to run around asking folks for money.

A Better Way

So you need a different model. Enter “membership subscriptions”. To be part of the club, you have to pay some amount. It should be automatic, like a PayPal subscription, so there is almost zero overhead on the volunteer leadership. It should be automatic so that your members don’t have to think about it. It should be monthly because your costs are monthly and matching those two time periods up is simplest and the least amount of work.

You will now need to do a solid amount of research to understand what your real monthly costs will be. There will be rent. There will be tool upkeep. There will be consumable supplies like toilet paper or sodas or paint or whatever it is your group needs. You will want to understand what kind of pad you need each month to save for annual costs or unforeseen problems. You probably also want to budget in some saving each month towards improving your group’s shared resources – eventually buying that laser cutter, for example. Or saving up for the down payment on a bigger space. You may also want to save up for a fund for member scholarships, or sponsored members, or paying for invited speakers.

After you know the monthly budget, you now have a sliding tradeoff between how much each person in the group will pay each month and how many people are in the group. The extremes are easy: If you need $1000 a month, you could have 1000 people give $1 or one person give $1000. But neither of those are likely, so you’ll be somewhere in the middle. Is 100 people who give $10 a month possible? What about four people who give $250? What about 33 people who give $30 a month? You can imagine situations where any of these could be the most appropriate case – it really depends on your group and what it is doing. My experience would suggest that you’ll have easier luck attracting fewer people that give more than having to find many people who give less, but you know your group better than I.
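If it helps to see that division written out, here is a tiny sketch with made-up numbers (a $1000 monthly budget and a few candidate contribution levels; none of these are real figures from the Makerspace):

#include <cmath>
#include <cstdio>

// Minimal sketch of the contribution tradeoff with made-up numbers:
// how many contributing members are needed at each monthly rate to
// cover a hypothetical $1000/month budget.
int main() {
    const double monthly_budget = 1000.0;             // hypothetical total monthly costs
    const double rates[] = {10.0, 30.0, 50.0, 250.0}; // candidate monthly contributions

    for (double rate : rates) {
        int members_needed = static_cast<int>(std::ceil(monthly_budget / rate));
        std::printf("$%.0f/month per member -> %d members needed\n", rate, members_needed);
    }
    return 0;
}

Swap in your own budget and rates; the point is just that the required member count falls straight out of the division.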

If you don’t know what people can afford, now is a good time to start going to the group members and finding out what sort of monthly contribution they’d be comfortable with. Have honest talks with people and get to a real number. It might be lower than you’d like, but it is best to get something people will actually commit to. These days folks have a lot of subscriptions running; when I was doing this, that was much less common. People will know what they are willing to contribute.

So now you can build a membership model. You’ll have a set monthly contribution per person, and from that you can find how many contributing people you’ll need. If you already have that many people, you’re finished! Congratulations! Chances are you don’t, so your new task is to attract enough people to your group who are willing to contribute. Even if you do have enough, I recommend the following steps anyway, because they will cement a solid group of “founders” who are dedicated to the project.

How To Do It

The advice could end here and be pretty straightforward. I basically described “how to do division”. But there is a key strategy you should use.

First, go around to all the members and present the model. Show them the spreadsheet. Share copies with them so they can tinker with it if they’d like. Make sure to answer all the questions on the different monthly costs you put in there. You’ll get to explain to them how much insurance costs, probably. They should check your work.

Second, start collecting monthly subscriptions now. Maybe not everyone will be enthused to contribute to a shared resource that doesn’t even exist yet. But you need to bootstrap your finances. Your group should be meeting regularly as if they actually had the shared resource. If you’re trying to find a permanent space, keep meeting at the temporary spaces. It will be a key time to bring everyone together and say something like “We are meeting here now, but according to the model, we’ll have our own space in a few months!” It helps people understand that the project is succeeding.

Third, do a “founders fundraising”. There will be some members who can spare a little extra money to kick-start the project. Maybe they’re deep pocketed or super committed. I suggest asking for a round of three months’ worth of contributions. This is really only two extra months, because they should be contributing their monthly amount already. You won’t ever do this again – it is a one-time deal. In effect, it pays for some members that you haven’t attracted yet. It should be uniform – don’t have different tiers. Don’t fall for the trap of having one super-donor. You want there to be a sense of shared ownership in the group, not one person who gets an outsized say because they donated more.

These founders are the committed folks, and they’ll be the core of the volunteers that keep the project going in its infancy. They need to be on even footing, because many of them will be putting in a lot of volunteer time on the various tasks that need to be completed. Folks who join later, but before you actually have the shared resource, could also join as founders if that makes sense for what you’re doing.

At this point you’ll be able to plot a chart of membership growth that shows when your monthly contributions will match your planned monthly expenses. You’re still out there gathering members, right? Well, as long as your numbers grow or hold steady, that crossover point gets closer and closer. Meanwhile, you’re collecting money to build a reserve to deal with folks coming and going.
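If you want to sketch that chart without a spreadsheet, here is a rough example with made-up numbers (12 founders, 2 new members a month, $30 each, a $1000 monthly budget; none of these are the Dallas numbers):

#include <cstdio>

// Rough sketch of the crossover chart with made-up numbers. Members start
// at 12 founders and grow by 2 per month, each contributing $30/month
// toward a hypothetical $1000/month budget. Prints the month when monthly
// contributions first cover monthly expenses, plus the reserve built so far.
int main() {
    const double budget = 1000.0;  // hypothetical monthly expenses
    const double rate = 30.0;      // hypothetical contribution per member
    int members = 12;              // hypothetical founding members
    const int growth = 2;          // hypothetical new members per month
    double reserve = 0.0;

    for (int month = 1; month <= 36; ++month) {
        double income = members * rate;
        reserve += income;  // before you have the space, everything goes to savings
        if (income >= budget) {
            std::printf("Month %d: %d members, $%.0f/month covers the budget "
                        "(reserve so far: $%.0f)\n", month, members, income, reserve);
            break;
        }
        members += growth;
    }
    return 0;
}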

Possible Outcomes

There are three possible outcomes: your contributing membership keeps growing, flattens out, or starts decreasing.

Increasing membership

If the contributing membership keeps increasing, then you’ll quickly reach your break-even point and be able to buy whatever shared resource you were trying to buy. You’ll be solidly able to hit monthly expenditure targets and will probably even start to grow a surplus. The group can use that surplus to improve the shared resources or buy new ones, or to sponsor scholarships for new members. Figuring out what your group will do with its surplus is a great problem to have. I strongly caution against lowering the membership contribution level. It will feel unfair to the folks who were already contributing at the higher amount, and it means you need to go back to the drawing board on what people are willing to contribute. You’d rather begin with a lower contribution than a high one that gets lowered later.

Flat membership

If you can only keep the same number of people contributing, or you lose people at the same rate that you gain them, you aren’t in that bad of a situation. Since you are collecting each month, eventually you’ll simply save up enough money to pay for your goal. The degenerate case here is that you’re the only one contributing and you eventually save up enough on your own to do whatever it is you’re trying to do.

Earlier I said that your monthly contributions need to match your monthly spend. That isn’t technically true if you’re doing something like a yearly lease. You’ll save up enough money for a whole year’s worth of lease – it could take longer than a year depending on how many founders you had. You’ll then be able to sign that lease in a responsible way, knowing that your group has saved up enough money to cover all the costs through the end of the lease contract. You’re making a bet here that by actually having the shared resource, you’ll be able to attract more members in the upcoming year. Attracting new members will be an important part of the group’s activities that first year if it wants to keep the shared resource for the next year. But if it can’t, maybe it just wasn’t meant to be. You’ll have had a good run of a year, and honestly that’s pretty great.

Declining membership

This is the failure mode. I would seriously reconsider the nature of your group. Are there toxic members driving away others? Are the membership rates wrong? Is the shared infrastructure just not in enough demand? Something has gone wrong. I can’t tell you what, but the signs don’t look good.

All is not lost. If you can keep a core group of folks to keep the dream alive, eventually you’ll build up enough funds for your group to get that laser cutter or storage unit or taco truck. Once the group has access to it, hopefully you can use whatever it is to attract enough people to get your membership numbers back up.

Conclusion

Hopefully this is a helpful guide. You and your founding team will have a lot of work to do, and if they’re volunteers, that’s a whole other resource to manage. But hopefully you’ve got a growing group of interested people and a cool piece of shared infrastructure you can all rally around.

Timing concerns of delay line style memory

Circuit diagram of a delay line style memory system.
Delay line memory simulation. 32 bits are stored in four 1-byte addresses.

I was getting bent out of shape that I needed to somehow reconstruct the system clock out of the data stored inside a delay line. But fooling around with an old discrete delay line simulation in a circuit simulator, replacing the giant stack of flip flops with a proper length delay line, shows that I don’t need to be too concerned.

As long as

  1. The delay line stays a constant delay length in terms of time (dubious)
  2. The system clock stays a constant speed (actually very easy because amazingly stable crystal oscillators are trivial nowadays)

A delay line’s delay varying over time is

  1. Highly probable with any sort of rotating media (magnetic drums, etc)
  2. Less likely for “solid state” delay lines such as acoustic torsion delay wire.
  3. Unknown for any other technology (tape loops?)

For rotating media or tape loops, it would probably be good to assume they require a second timing track. It’s somewhat of a “waste” of media density, but you should only need one per media – so one track on a drum or one track on the tape. You can have as many other tracks as you can cram on there.

For the more stable media, where the main drift is due to temperature, there could be some type of calibration mode where a signal is put into the delay line and then compared to the current clock speed. The clock speed could then be adjusted to match. This could even be automatic – perhaps something you would perform once on startup, and then again when the machine is up to operating temperature. Of course, anything that is temperature dependent is probably best handled by installing a heater and keeping it at a steady 100 °F (or whatever) no matter what.

Audio Digital Delay with DRAM and Arduino

Aka “ADDDA” or “AuDiDeDrAr” or “aww dee dee drawer” or “A3DA”

I’ve had this idea bouncing around in my head that you could use 1-bit wide DRAM as a delay line if you simply counted up through its addresses, reading and writing as you go. 1-bit wide DRAM like the M3764 has separate pins for Data In and Data Out, which makes the read-and-write method easier.

The light bulb moment was coming across an old post on diystompboxes.com where one commenter provides a short snippet of code to do a Delta-Sigma analog to digital converter using the Arduino’s analog comparator pins. I had planned to do this purely in software by using the normal ADC pins and then calculating the Delta myself. But the built-in comparator makes this dead simple!

You can just see the OKI DRAM chip under all those wires.

So armed with an easy way to generate a one bit wide data stream from an analog signal, I went about hooking up the DRAM chip to a clone Arduino Pro Mini. There are quite a few “test a DRAM chip with an Arduino” projects out there, but the datasheet for the OKI chip has good timing diagrams that give the gist of what you need to do. DRAM has a shared set of address pins for the row and column you’re selecting, which you can think of as two halves of the full address. To get those halves in, you put the row on the address lines and strobe the /RAS pin. Then you put the column on the address lines and strobe the /CAS pin. Then your data is on the Data Out pin. Writing involves putting the data you want to write on the Data In pin and strobing the /WE pin after you strobe the /CAS pin. That’s really all there is to it. You’ll see that there are some shortcuts you can take to speed up access, like only setting the row once for any number of columns you’d like to access. You can also read the Data Out pin right before writing new Data In. I do both of these in my implementation to increase the sample rate.
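To make the sequence concrete, here is a hedged sketch of a single read-then-write cycle in plain Arduino C++. The pin numbers are hypothetical, and none of the real timing margins, refresh, or speed-ups are shown; my actual code looks different.

// Illustrative only: one read-modify-write access to a 64K x 1 DRAM,
// following the sequence described above. Row on the address lines,
// strobe /RAS; column on the address lines, strobe /CAS; sample Data Out;
// put new data on Data In and strobe /WE.

const int RAS = 2, CAS = 3, WE = 4, DIN = 5, DOUT = 6;  // hypothetical pins
const int ADDR[8] = {7, 8, 9, 10, 11, 12, A0, A1};      // hypothetical address pins

void putAddress(byte a) {
  for (int i = 0; i < 8; i++) digitalWrite(ADDR[i], (a >> i) & 1);
}

// Read the old bit at (row, col) and write a new one in the same cycle.
bool readWriteBit(byte row, byte col, bool newBit) {
  putAddress(row);
  digitalWrite(RAS, LOW);        // latch the row
  putAddress(col);
  digitalWrite(CAS, LOW);        // latch the column
  bool oldBit = digitalRead(DOUT);
  digitalWrite(DIN, newBit);
  digitalWrite(WE, LOW);         // strobe write enable
  digitalWrite(WE, HIGH);
  digitalWrite(CAS, HIGH);
  digitalWrite(RAS, HIGH);
  return oldBit;
}

void setup() {
  pinMode(RAS, OUTPUT); pinMode(CAS, OUTPUT); pinMode(WE, OUTPUT);
  pinMode(DIN, OUTPUT); pinMode(DOUT, INPUT);
  for (int i = 0; i < 8; i++) pinMode(ADDR[i], OUTPUT);
  digitalWrite(RAS, HIGH); digitalWrite(CAS, HIGH); digitalWrite(WE, HIGH);
}

void loop() {
  // Real code would sweep through all addresses here, reading the delayed
  // bit out and writing the fresh comparator bit in at each location.
}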

To start out, I did everything very simply, using the built-in Arduino functions to make it easy to read and understand. I then measured the performance of the main loop by looking at the Sigma-Delta signal’s minimum change period. This gave me a baseline, and then I systematically went through the code swapping out the built-in functions for faster implementations one by one, measuring any increase in performance. If a change didn’t lead to any improvement, I wouldn’t commit it. Instead I’d commit a comment that I’d tried it. In retrospect, it would have been better to use git revert so I had a better history of what I specifically tried.

Here I am demonstrating the delay at maximum delay length. I add some feedback to make it act like a reverb towards the end.

Doing this I was able to improve the performance of the DRAM access by a factor of about 16. The original version took 8 seconds to cycle through the entire memory and the final version took about 500ms. My commits show the improvements, although I realized later I was measuring the wrong signal! It was at least indicative of the improvements. All of the timing in this project has a lot of jitter due to the many different possible code paths with no attempt to balance them out.

In the end, the DRAM /WE pin is the best measure of how often you’re writing to the DRAM. It is at about 139 khz. I measured the actual audio delay produced by the system using my oscilloscope and it is about 480 ms at its longest. Those two numbers agree:

1 second      | 64*1024 samples           seconds
--------------|----------------- = 0.471 ---------
139 k samples | 1 buffer                  buffer

I’m new to working directly with delta-sigma converters, and after reading a few pages about it this morning, I’m not sure what I’ve built is exactly a delta-sigma converter at all!

My current understanding is that 139 khz sampling rate (F) means a Nyquist frequency (F/2) of 69.5 khz regardless of the type of converter used. I found a paper by Aziz, Sorensen, and Van der Spiegel from 1996 describing how delta-sigma converters work, and it gives some equations.

Letting the oversampling ratio, f_s / (2 * f_b) = 2^r …

[Therefore] every doubling of the oversampling ratio i.e., for every increment in r, the SNR improves by 9 dB, or equivalently, the resolution improves by 1.5 bits.

Aziz, P., Sorensen, H., & Van der Spiegel, J. (1996). An overview of sigma-delta converters. IEEE Signal Processing Magazine, 13(1), 61–84. https://doi.org/10.1109/79.482138
f_s -> sampling frequency
f_b -> signal bandwidth
r -> oversampling

f_s / ( 2 * f_b) = 2^r

f_s / 2^r = 2 * f_b
f_s / (2 * 2^r) = f_b
f_s / 2^(r+1) = f_b

139e3 / 2^(r+1) = f_b
r = 0, 139e3 /  2 = 69.5 khz,  Nyquist sampling
r = 1, 139e3 /  4 = 34.7 khz, +1.5 bits
r = 2, 139e3 /  8 = 17.3 khz, +3.0 bits
r = 3, 139e3 / 16 =  8.6 khz, +4.5 bits
r = 4, 139e3 / 32 =  4.3 khz, +6.0 bits
r = 5, 139e3 / 64 =  2.2 khz, +7.5 bits

Those numbers would suggest a fairly lofi device. And certainly what I have running on my desktop is by no means producing quality audio. But it also doesn’t sound that bad? I’m losing a bit in my calculations because I should be counting the “single bit” of the comparator. If we take that bit and then work it backwards…

f_s / (2 * f_b) = 2^r
total_bits = 1.5 * r + 1
8 bits = 1.5 * r + 1
r = 7 / 1.5 = 4.667
f_b = 139e3 / 2^(4.667 + 1) = 139e3 / 50.8 = 2.74 khz

So the system is operating at 8 bits up to a bandwidth of about 2.7 khz. That sounds about right. What happens if I add more features and reduce the sampling rate to 100 khz?

100e3 / 50.8 = 1.97 khz

What happens if I find some optimizations and increase the sampling rate to 200 khz?

200e3 / 50.8 = 3.94 khz

Someone check my math.
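If you would rather check it with a program, here is a short one that just replays the arithmetic above. It assumes the 1.5-bits-per-doubling rule and the f_b = f_s / 2^(r+1) relation from the derivation, nothing more.

#include <cmath>
#include <cstdio>

// Quick check of the oversampling arithmetic: f_b = f_s / 2^(r+1) as in the
// derivation above, and total bits = 1 + 1.5*r for a 1-bit converter.
int main() {
    const double fs = 139e3;  // measured /WE rate

    for (int r = 0; r <= 5; ++r) {
        double fb = fs / std::pow(2.0, r + 1);
        std::printf("r = %d: f_b = %7.1f Hz, %.1f bits\n", r, fb, 1.0 + 1.5 * r);
    }

    // Work backwards from a target of 8 bits.
    double r8 = (8.0 - 1.0) / 1.5;
    std::printf("8 bits -> r = %.3f, f_b = %.0f Hz at 139 kHz, %.0f Hz at 100 kHz, "
                "%.0f Hz at 200 kHz\n",
                r8, fs / std::pow(2.0, r8 + 1),
                100e3 / std::pow(2.0, r8 + 1),
                200e3 / std::pow(2.0, r8 + 1));
    return 0;
}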

I vary the delay length from max to about minimum, then set it somewhere in the middle. The reverb feedback is still applied because that makes it easier to hear the changes in delay length.

National Semiconductor 4510 Mathematician

I have a small collection of vintage calculators that I stumbled into collecting. I found one at a garage sale, and then one was given to me, then I found a neat one on eBay for a good price… Before I knew it, I was a calculator collector.

I actually use most of them despite having a great calculator app on my phone because I prefer their physical interfaces. I have one on each desk and one in my bag so I don’t have to go searching. I don’t have that many bags and desks though so there is also a small stash in a drawer.

The brown and tan color scheme is very 70s. I think they’d have used wood grain print adhesive vinyl if they could have.

My latest addition is a National Semiconductor 4510 Mathematician from the mid 70s. It has an 8 digit red LED display and runs on a 9 volt battery. There is a jack on the top edge for connecting a wall supply if you’ve got a lot of math to do.

It is in great condition and the seller even included a brand new battery. It is one of the lesser RPN calculators of the 70s and not expensive. Like most of my collection, it is not valuable but it is uncommon.

This model isn’t programmable, although they made a version that was, called the Mathematician PR. Those are a little rarer, but their programmability is so limited that I didn’t want to deal with finding a nice one at a price I wanted to pay. I also know that I don’t use the programmability of the much nicer calculators I already have, so it wasn’t something I’d use anyhow.

What makes this model stand out is its RPN entry method. If you’re not familiar with RPN, there are some great introductions online. I tried to explain it recently and was told that it sounded insane. You get used to it! It starts to make sense… eventually.

This model’s main downside is that it doesn’t do scientific notation, so its range is limited. Some of the math I do most often is calculating values for circuit components, and those values tend to come in steps of a thousand. For example, resistors are commonly available in units of ohms, kilo ohms, and mega ohms, which means you do a lot of math with numbers involving 10^3 and 10^6. Capacitors are similar but with much smaller units. You often deal with pico farads – that’s 10^-12. So I’ll have to keep track of the exponents myself when doing those kinds of calculations.

The NatSemi Mathematician is delightfully slow for some operations. For example computing a logarithm of a number is slow enough that the calculator displays an animation of sorts to show that it is “thinking”.

Computing the natural logarithm of pi takes long enough for you to wonder when you’d ever want to know such a value.

I don’t know how much I’ll use this addition to my collection. If I leave it where I can see it, I’ll use it occasionally if only as a muse for an earlier time in computing history.

From the notebook: Tape Transports

Today’s notebook sketch is some ideas for building a tape “transport” – the mechanical bits that move the tape around in the right way and at the right tension.

Three transport configurations and my current thinking on a capstan.

I have a weird fascination with magnetic storage media and tape in particular. It was a key technology in computing for decades and it has more or less completely disappeared.

All that time in home, commercial, and industrial use has left lots of bits and bobs to experiment with, although it is very very quickly disappearing.

Other than the playback and record heads and the media itself, the devices can be recreated from scratch. (And let me get back to you on making heads and media…)

Prototype Game of Life Synth Module

Conway’s Game of Life (CGoL) has always fascinated me. It is probably the most well known of all cellular automata and also probably the most intuitive. Yet even simple patterns can turn into complex sequences of shapes, patterns, and noise.

Years ago, when learning about the HTML5 WebAudio API, I came across a fun little demo called Blips of Life by Mike M. Fleming. Use your mouse to draw some dots and then click the triangle Play icon in the bottom left. Great, right? I’ll let you play around with that for a while. Leave it running while you read, perhaps?

This is in 1U Eurorack format.

When it came time to start prototyping new modules for my modular synth, I was inspired to recreate Mike’s work in hardware. I didn’t have exactly the parts to fully recreate his Blips of Life, but using the parts I had in hand I made a prototype.

My version has only an 8×8 grid and only a major pentatonic scale. The small grid means that there are fewer possible patterns, although not so few that it is monotonous. The major pentatonic scale is fine. The largest problem with the prototype is that I wrote it in CircuitPython, which has no interrupt support. I love Adafruit – they’re a great company and they design terrific boards. But removing interrupts from their fork of MicroPython has cut several projects short.

The prototype works pretty well and exposed a new design challenge: how do you deal with “games” that end in loops? Loops are one of the end states a CGoL pattern can reach – a pattern can go “extinct”, go “steady”, or loop through a finite sequence. The first case is easy to detect and deal with: if all the cells of the grid are off, repopulate the board. You can detect a steady state by comparing the next board with the previous one. If they’re identical, repopulate.

But loops can be any arbitrary length, and can step through rather complex patterns. The only way I know to detect them is to have a list of boards known to be part of or lead to a loop. I’ve got some ideas how to do that either via live loop detection or with a precomputed list of boards. As yet, the performance limitations of CircuitPython really prevent tackling it. I’ll need to reimplement the code in C++ using Arduino. Hats off to Adafruit for supporting both Python and Arduino on their boards.
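For the eventual C++ rewrite, the detection logic might look something like the sketch below: the 8×8 board packs into a single 64-bit integer, and a short history of recent boards doubles as a crude loop detector. This is illustrative only, not the module’s firmware.

#include <stdint.h>
#include <string.h>

// Sketch of end-of-game detection for an 8x8 Game of Life board packed
// into a uint64_t: bit (8*y + x) is cell (x, y). Edges wrap around.

static inline int cell(uint64_t b, int x, int y) {
  return (b >> (8 * ((y + 8) % 8) + ((x + 8) % 8))) & 1;
}

uint64_t step(uint64_t b) {
  uint64_t next = 0;
  for (int y = 0; y < 8; y++) {
    for (int x = 0; x < 8; x++) {
      int n = 0;
      for (int dy = -1; dy <= 1; dy++)
        for (int dx = -1; dx <= 1; dx++)
          if (dx || dy) n += cell(b, x + dx, y + dy);
      int alive = cell(b, x, y);
      if ((alive && (n == 2 || n == 3)) || (!alive && n == 3))
        next |= (uint64_t)1 << (8 * y + x);
    }
  }
  return next;
}

// Crude loop detector: remember the last few boards and flag any repeat.
// A steady state shows up as a repeat of the immediately previous board.
const int HISTORY = 16;
uint64_t history[HISTORY];
int histLen = 0;

bool shouldRepopulate(uint64_t next) {
  if (next == 0) return true;                 // extinct
  for (int i = 0; i < histLen; i++)
    if (history[i] == next) return true;      // steady state or short loop
  if (histLen < HISTORY) {
    history[histLen++] = next;
  } else {
    memmove(history, history + 1, sizeof(uint64_t) * (HISTORY - 1));
    history[HISTORY - 1] = next;
  }
  return false;
}

Clearing the history whenever you repopulate keeps it honest; loops longer than the history still slip through, which is where the precomputed-list idea comes in.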

Ray Diagram: Now with Measurements

I’ve continued to work on the optical ray diagram tool prototype. I added a way to measure the effective focal length (EFL) of the lens system. It isn’t automatic, but by adjusting the parameters you can align an intersection at the optical axis and read off the EFL. Obviously this should be a one button click sort of thing, but it is kind of interesting to see how the various parameters affect EFL.
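One handy sanity check while wiring up measurements is the closed-form EFL of two thin lenses in air, 1/f = 1/f1 + 1/f2 - d/(f1*f2). Here is a trivial sketch of it with made-up focal lengths; it is not code from the tool.

#include <cstdio>

// Sanity check: effective focal length of two thin lenses in air separated
// by distance d, using the standard combination formula
//   1/f = 1/f1 + 1/f2 - d/(f1*f2)
// The numbers below are made up, just to exercise the formula.
double combinedEFL(double f1, double f2, double d) {
    return 1.0 / (1.0 / f1 + 1.0 / f2 - d / (f1 * f2));
}

int main() {
    std::printf("f1=100, f2=100, d=50 -> EFL = %.2f\n", combinedEFL(100, 100, 50));
    return 0;
}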

The UI is still very rough and the code is even worse. But I’ve actually been using it!

My main area of interest before going to automation is identifying and coding all the various measurements that you want of a lens. To identify these, I’ve been reading the excellent Applied Optics and Optical Engineering edited by Rudolf Kingslake. Chapter 3, Photographic Objectives, traces the history of the development of lenses from the birth of photography to the book’s publication in the 1960s. I would recommend starting there if you have some familiarity with optics already. If you’re new to optical design, start with Chapter 1, Lens Design. I got my copy from the public library, but you can also borrow a digital copy from the Open Library on archive.org.

I realized I should change the example lens configuration in my prototype to a Cooke Triplet after reading Chapter 3. As the book points out, a lot of modern lens designs can be traced to or analyzed as variants of the Cooke Triplet. It is also unique in being only three elements but having performance that is good enough to warrant doing the work of designing one yourself. It is also non-trivial enough that you want an automated tool to design one, so it makes a good example.

The next step will be to add proper measurements of the various aberrations and distortions. I’ll be using worked examples from Applied Optics and Optical Engineering to check the calculations of my tool. The current default configuration is from this student project in MIT OpenCourseWare by Choi, Cooper, Ong, and Smith. I think I’ve already found a discrepancy in my results so my work is cut out for me.

Another source for a worked example is Dennis Taylor’s original patent from the 1890s. While Taylor invented it, the design is named after the company he was working for at the time – T. Cooke & Sons of York.

Early 90s Camera Teardown

Over the weekend I tore down an old RCA Super8 camcorder. It came with a power supply but it had already been damaged in the past. The viewfinder showed text over static and the tape mechanism just made a horrible squealing sound.

The good stuff – varifocal zoom mechanism and a teeny tiny CRT

I was interested in perhaps using the imager somehow but it seems to be damaged and not worth chasing down. The autofocus and motorized zoom work great though, so I’m hoping to use those paired with a Raspberry Pi camera. Even if I don’t ever use the motorized features, they’re manually adjustable so that will make for a nice setup.

Most of the parts that I didn’t use.

The other electronic parts of the camera are a bit too specific to be useful. I’m hoping to reuse some of the mechanics of the tape transport in my 1/4” audio tape experiments. 8mm is larger than 1/4” (6.35mm) so I think some of the various guides and rubber pinch rollers will come in handy.

Before I send all the extra parts to the electronics recycler, I need to plug it all back together and document the connector pin outs.

Prototype Eurorack Frequency Modulation Synth Module

I have a few prototype Eurorack modular synth modules in the works. I tend to get them working well enough to be musically interesting and then move to work on the next prototype. It’s not because I don’t plan on finishing them – it’s more that all the biggest questions are answered and I want to move on to the next prototype and answer whatever questions it is trying to answer.

The prototype laid out

This module is based around the Yamaha YM3812 chip, also known as the OPL2. You might know it as part of the capabilities of the AdLib and original Sound Blaster sound cards. Think of the classic sound of the Doom soundtrack – that’s coming out of a YM3812 (emulated or otherwise).

But you can do a lot more than what you hear on the Doom soundtrack – even though I’d be fine if that were the limit of its sonic capabilities. FM synthesis is “weird” in the way it can produce wild sounds that are very hard to produce with subtractive synthesis. As an added bonus, the YM3812 has multiple symmetrical channels and is thus capable of impressive polyphony. One module isn’t just one voice – it’s 6. Also it has a drum synth mode… It makes sense that it is so capable if you think about all the great PC game soundtracks made with one, but it isn’t what you’d expect to be coming out of a single modular synth module.

The Problem

In the modular world, you tend to break apart and de-integrate as much of the synth chain as possible. This is so you have the freedom to reconfigure the synthesis signal path in wild and fun ways. So a module with not only a complete voice but six complete voices is swimming against the current in how you typically design these things.

How the front and back circuit boards attach together in the prototype

One outcome of such a non-modular module is the sheer number of possible parameters. Typically a module might have 1-4 inputs and 1-2 outputs. That’s a gross simplification, but it gives you an idea of the signal complexity involved in the majority of modules. A single channel of a YM3812 has about 16 parameters. And then you have 6-8 copies of those – each voice can be configured more or less independently. So we’re talking hundreds of possible inputs.

It has one output.

So on the face of it, this is a bad match. And therein lies the hypothesis of this design. “How can you adapt a YM3812 to the modular synth design norms?” How do you make it understandable to someone thinking in terms of fairly straightforward signal chains? How do you present the configuration of a YM3812 so it matches the mental model of someone used to something like the Behringer Neutron?

A Lone Voice

I can’t do anything about the output space. There is literally only a single pin for the output and there isn’t any access to the individual voices. So right off, I decided that this prototype would be a single voice. That might seem wasteful, but I can use the other voices to “mirror” the main voice to fill it out by slightly detuning them or by playing notes related by harmonics such as octaves or triad chords.

That also reduces the input space. We’re down from hundreds to a dozen or so inputs if we’re only treating this as a single voice. Some existing designs stop there, but I wanted to go further.

Time Variations

There are broadly two types of inputs to a voice: time varying and time invariant. The time varying inputs configure, for example, the way the amplitude of the sound changes over time. In a modular synth, inputs like this are controlled by other modules. So I decided to discard all time-varying parameters. Parameters like the amplitude of the voice would be modified externally using voltage controlled amplifiers (VCAs), just like you would in a standard modular signal path.

This reduces the input space by half. We’re looking at about 6 inputs. That isn’t bad – there are definitely synth modules with 6 inputs. But I wanted to go further.

Digital Zippers

The YM3812 is a digital chip. All of the OPL series of synth chips are. This is what made them such a great product for Yamaha. It was easy to make a digital chip out of silicon so they could produce the entire sound path out of a handful of parts that would take thousands of separate analog components to replicate. And because it’s digital, it’s very easy to use in a PC sound card. The CPU just sets the input registers of the chip and away you go.

In a modular synth, all of the patch paths are analog: continuous, time varying signals between about -10 and 10 volts. To adapt these kinds of signals to the YM3812, I would need to digitize them using an analog to digital converter (ADC). But there’s a problem here too – what digital resolution should I use? If the resolution is too low, continuously varying signals get converted into broad, stair-step patterns. Your smooth, subtly varying input gets turned into sudden, chunky sound changes. People call this effect “zippering” because it can cause a zipper-like sound as a parameter moves through those discrete stair-step values. That isn’t intrinsically bad in the world of analog synths, but you’d like to at least have the option to avoid it.

Some of the input parameters of the YM3812 have a very limited range of possible values. As an example, the strength of the feedback from one internal voice generator to itself is controlled by just three bits. That’s only 8 possible values! That does not map well to a 20 V swing input.
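Here is a small illustration of that mismatch: a smooth control voltage read by a 10-bit ADC collapses into just eight steps when it drives a 3-bit parameter. The 0-5 V scaling is hypothetical, not the module’s actual input conditioning.

#include <cstdio>

// Illustration of zippering: a smooth 0-5 V control voltage read by a 10-bit
// ADC (0-1023) collapses to only 8 values when it drives a 3-bit parameter.
int main() {
    for (int adc = 0; adc <= 1023; adc += 128) {
        int feedback = adc >> 7;              // keep only the top 3 bits (0-7)
        double volts = 5.0 * adc / 1023.0;
        std::printf("%.2f V -> ADC %4d -> feedback level %d\n", volts, adc, feedback);
    }
    return 0;
}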

So I took off the table all the parameters that don’t have enough bits of resolution to make sense with an analog input. They’d still be accessible, but through manual switches and control knobs. They’d be more for setting the broad mode of the voice, not for changing during the time of a single note playing. That removes a handful more inputs from consideration. In fact, you’re down to only four. That is a completely respectable number of inputs for a synth module. But I wanted to go further.

A page from my notebook where I went through all the possible inputs to a voice

Getting Rational

One of the interesting things about FM synthesis is that a lot of the timbre results from the mathematical ratio between the different frequencies of the oscillators involved. In the YM3812, each oscillator has 12 possible frequency multipliers to aid in defining these ratios. So while there are only 12 values for a given oscillator, there are 144 combinations between the two oscillators of each voice. Twelve steps isn’t enough for an analog input but 144 is fine. So my final reduction was to combine the two ratio inputs into a single input.
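One way to do that combination (a sketch of the idea, not necessarily how the final firmware handles it) is to treat the control value as an index from 0 to 143 and split it into the two multiplier selections:

#include <cstdio>

// Sketch of folding the two oscillator multiplier selections into one input:
// an index 0-143 (for example, from a scaled ADC reading) picks one of the
// 12 x 12 combinations. The mapping order here is arbitrary/hypothetical.
void ratioFromIndex(int index, int &multA, int &multB) {
    multA = index / 12;   // 0-11, multiplier selection for oscillator A
    multB = index % 12;   // 0-11, multiplier selection for oscillator B
}

int main() {
    int a, b;
    ratioFromIndex(137, a, b);
    std::printf("index 137 -> multiplier slots A=%d, B=%d\n", a, b);
    return 0;
}

The mapping order matters for how a sweep of that input sounds, so you might reorder the table so adjacent indices land on related ratios.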

I wanted to think about the ratios as actual ratios so I wrote this out by hand. This is the type of thing I think about to fall asleep sometimes.

And that leaves us with just three inputs: one that controls the frequency of the voice, one that controls the amount of mixing between the two internal oscillators, and one that controls the ratio between the oscillators. To put it another way: one controls the pitch and the other two control the timbre. That sounds like a perfectly understandable module. It is still more integrated than you would see in a traditional module where the timbre modification would occur in a separate module (or sets of modules), but it is much closer.

And there you have how I arrived at the final design of the prototype. All other design considerations stem from the decision of which inputs to use: the physical layout, the specifics of how signals map to sound changes, the size of the module, etc.

There are a lot of details I’m glossing over here, and I’ll talk about them more in future articles.

More ray diagrams

I’ve dug into this ray diagram sketch on CodePen because it’s pretty satisfying to twiddle the properties of the simulation and see how things change. I’ve added some sliders, but beware of the code – it isn’t pretty. I’d say it’s about reached the point of unmaintainability.

Screenshot as of this post

The UI is a total wreck, but you can currently alter all the major parameters. There aren’t any measurements yet, which is probably the next most important feature. It’s fine and dandy to move these virtual lenses around and see how the rays refract, but without the proper measurements it’s not actually a useful tool. Also there isn’t a way to change the lens type or order, and once you can do that, you’ll really want to be able to save and load a given configuration.

And this hasn’t even gotten into the optimization part! That’s the whole purpose!