I’ve continued to work on the optical ray diagram tool prototype. I added a way to measure the effective focal length (EFL) of the lens system. It isn’t automatic, but by adjusting the parameters you can align an intersection at the optical axis and read off the EFL. Obviously this should be a one button click sort of thing, but it is kind of interesting to see how the various parameters affect EFL.
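For the simplest case of two thin lenses, the EFL you read off the diagram can be cross-checked against the standard thin-lens combination formula. Here's a minimal sketch in JavaScript; the function name and the example values are mine, not from the prototype:

```javascript
// Effective focal length of two thin lenses separated by distance d,
// from the thin-lens combination formula:
//   1/f = 1/f1 + 1/f2 - d / (f1 * f2)
// All quantities in millimeters.
function combinedEFL(f1, f2, d) {
  return 1 / (1 / f1 + 1 / f2 - d / (f1 * f2));
}

// Example: two 100mm lenses, 50mm apart
console.log(combinedEFL(100, 100, 50)); // ≈ 66.67
```

Closing the gap between a number like this and the value read off the diagram is a decent sanity check before trusting the tool with anything fancier.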
My main area of interest before going to automation is identifying and coding all the various measurements that you want of a lens. To identify these, I’ve been reading the excellent Applied Optics and Optical Engineering edited by Rudolf Kingslake. Chapter 3, Photographic Objectives, traces the history of the development of lenses from the invention of photography to the book’s publication in the 1960s. I would recommend starting there if you already have some familiarity with optics. If you’re new to optical design, start with Chapter 1, Lens Design. I got my copy from the public library but you can also borrow a digital copy from the Open Library on archive.org.
I realized I should change the example lens configuration in my prototype to a Cooke Triplet after reading Chapter 3. As the book points out, a lot of modern lens designs can be traced to or analyzed as variants of the Cooke Triplet. It is unique in having only three elements while still performing well enough to warrant doing the work of designing one yourself. It is also non-trivial enough that you want an automated tool to design one, so it makes a good example.
The next step will be to add proper measurements of the various aberrations and distortions. I’ll be using worked examples from Applied Optics and Optical Engineering to check the calculations of my tool. The current default configuration is from this student project in MIT OpenCourseWare by Choi, Cooper, Ong, and Smith. I think I’ve already found a discrepancy in my results so my work is cut out for me.
Over the weekend I tore down an old RCA Super8 camcorder. It came with a power supply, but the camera itself had already been damaged: the viewfinder showed text over static, and the tape mechanism just made a horrible squealing sound.
I was interested in perhaps using the imager somehow but it seems to be damaged and not worth chasing down. The autofocus and motorized zoom work great though, so I’m hoping to use those paired with a Raspberry Pi camera. Even if I don’t ever use the motorized features, they’re manually adjustable so that will make for a nice setup.
The other electronic parts of the camera are a bit too specific to be useful. I’m hoping to reuse some of the mechanics of the tape transport in my 1/4” audio tape experiments. 8mm tape is wider than 1/4” (6.35mm), so I think some of the various guides and rubber pinch rollers will come in handy.
Before I send all the extra parts to the electronics recycler, I need to plug it all back together and document the connector pin outs.
I’ve dug into this ray diagram sketch on CodePen because it’s pretty satisfying to twiddle the properties of the simulation and see how things change. I’ve added some sliders, but beware of the code – it isn’t pretty. I’d say it’s about reached the point of unmaintainability.
The UI is a total wreck, but you can currently alter all the major parameters. There aren’t any measurements yet, which is probably the next most important feature. It’s fine and dandy to move these virtual lenses around and see how the rays refract, but without the proper measurements it’s not actually a useful tool. Also there isn’t a way to change the lens type or order, and once you can do that, you’ll really want to be able to save and load a given configuration.
And this hasn’t even gotten into the optimization part! That’s the whole purpose!
During lunch I whipped up a software sketch based on that notebook entry I posted earlier. I thought I’d start with actually drawing the shapes required for a ray diagram, and then make it more data-driven over time. I should be able to then make a clean seam in the code where the diagram is controlled either by a human or a robot – maybe both at once!
Also, I obviously need to write the actual solver, but that’s “just” a bunch of geometry and shouldn’t be too hard. I think I’m okay to assume that each ray will hit each boundary in order. If a ray doesn’t hit its next boundary, it gets dropped from the list.
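The per-boundary step of that geometry is mostly Snell's law. Here's a hedged sketch in JavaScript using the vector form of Snell's law; the function name and object shapes are my own invention, not code from the sketch. Returning null is the "drop the ray" case, which also neatly covers total internal reflection:

```javascript
// Refract a 2D unit direction vector `dir` at a surface with unit
// normal `normal` (pointing back toward the incoming ray), going from
// refractive index n1 into n2. Vector form of Snell's law.
// Returns the new unit direction, or null on total internal reflection.
function refract(dir, normal, n1, n2) {
  const eta = n1 / n2;
  const cosI = -(dir.x * normal.x + dir.y * normal.y);
  const sin2T = eta * eta * (1 - cosI * cosI);
  if (sin2T > 1) return null; // total internal reflection: drop the ray
  const cosT = Math.sqrt(1 - sin2T);
  return {
    x: eta * dir.x + (eta * cosI - cosT) * normal.x,
    y: eta * dir.y + (eta * cosI - cosT) * normal.y,
  };
}
```

The solver loop would then be: intersect the ray with the next boundary, call something like this at the hit point, and repeat until the ray misses or exits the system.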
This is pretty software-oriented compared to what I’ve been working on lately, but it’s the easiest thing to write up and very recent work.
Once my optics bench is finished, I plan on measuring all the lenses I have on hand and building something with them. To do that, I’ll have to try out designs using software. But since the design space is limited by what I actually have on hand, I have the opportunity to write a tool that uses the constraint of what I have “in stock” to reach a design goal.
The way I envision this now is some type of Genetic Boxcar2D but for optics. And the first step to that is a web technology ray tracer and optical system design tool. After all, the genetic algorithm is more or less tweaking all the knobs I’d be tweaking manually in such a tool.
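As a toy illustration of that "tweaking all the knobs" idea, here's a minimal genetic loop in JavaScript that evolves the parameters of a two-thin-lens system toward a target EFL. Everything in it (the fitness function, mutation scheme, and population size) is a made-up placeholder, not a design for the real tool:

```javascript
// Toy genetic loop: evolve (f1, f2, d) of a two-thin-lens system
// toward a target EFL of 50mm. All constants here are illustrative.
function efl(f1, f2, d) {
  return 1 / (1 / f1 + 1 / f2 - d / (f1 * f2));
}

const target = 50; // mm
const fitness = (g) => -Math.abs(efl(g[0], g[1], g[2]) - target);
const mutate = (g) => g.map((v) => v + (Math.random() - 0.5) * 5);

// Random starting population: focal lengths 20-120mm, spacing 0-40mm
let pop = Array.from({ length: 20 }, () => [
  20 + 100 * Math.random(),
  20 + 100 * Math.random(),
  40 * Math.random(),
]);

for (let gen = 0; gen < 200; gen++) {
  pop.sort((a, b) => fitness(b) - fitness(a)); // best first
  const survivors = pop.slice(0, 10); // keep the top half
  pop = survivors.concat(survivors.map(mutate)); // refill with mutants
}

console.log("best EFL found:", efl(...pop[0]).toFixed(2));
```

The real version would swap the fitness function for something that scores aberrations and other measurements, and would constrain the gene pool to the lenses actually in stock, but the shape of the loop stays the same.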
Here’s a spread from my notebook where I’m thinking about the best way to model the domain problem of an optical system.
And here’s me thinking about it some time between PAX (West) 2017 and my trip to Italy in the fall.
Here’s a page from 2015 – at least before an HCDE info session on Oct 15th.
I know I’ve been thinking about this for at least a decade, but I haven’t found my original notes, so we’ll leave it at that.
Note to self: I need to make an index of all my notebooks.