Seems similar (though much more advanced) to a piece of DOS software I remember floating around various "overunity" forums I used to visit years ago (I have a secret fascination with such efforts; I know they can't succeed, but it is interesting to see the lengths people go to in the attempt), though I don't recall its name.
This software would probably be very useful for designing motors and generators, and for the DIY crowd in those areas.
Yes, though these days the term 'signal processing' is usually reserved for operations on the signal in digital form, i.e. after sampling. Since this works on the signal propagating in analog form, it is more useful for calculations about the propagation of signals in waveguides, a TWT, or the like.
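As a concrete example of the analog-domain bookkeeping I mean (nothing specific to this tool, just the textbook TE10 dispersion in a rectangular guide):

    import numpy as np

    c0 = 299_792_458.0  # speed of light, m/s

    def te10_beta(f_hz, a_m):
        """Phase constant (rad/m) of the TE10 mode in a rectangular
        waveguide of broad-wall width a_m; 0 at or below cutoff."""
        k = 2.0 * np.pi * f_hz / c0        # free-space wavenumber
        kc = np.pi / a_m                   # TE10 cutoff wavenumber
        return np.sqrt(np.maximum(k**2 - kc**2, 0.0))

    # WR-90 (a = 22.86 mm): group delay per metre across X-band
    a = 22.86e-3
    f = np.linspace(8e9, 12e9, 5)
    beta = te10_beta(f, a)
    vg = c0**2 * beta / (2.0 * np.pi * f)  # vp * vg = c^2 in a hollow guide
    print(np.round(1e9 / vg, 3))           # ns/m: strongly dispersive near cutoff

A pulse sent down the guide smears out according to this curve, which is exactly the sort of propagation effect you'd want to compute before the signal ever reaches an ADC.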
What's your take on model accuracy? I ask because my startup has developed a high-order-accurate CEM solver (think MoM full-wave type solvers, but every time you double the mesh density you get a 100x accuracy improvement, thanks to a bunch of algorithmic breakthroughs). But we've also put a lot of work into making CAD import fast and easy, and it's not clear to me which will get people more interested: the technical performance of the solver or the usability of the GUI. I'd be interested in thoughts on that. Or, if anyone is interested in getting hands-on, send me an email (see my profile).
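(For scale: if the error scales as h^p in the cell size h, a 100x error reduction per mesh halving corresponds to 2^p = 100, i.e. an observed convergence order of p = log2(100), roughly 6.6.)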
Have you tried stripline as a benchmark? It has an analytic solution for infinite ground planes, so plotting error vs. simulation time under mesh refinement gives nice accuracy-vs-time curves. Jim Rautio did this for Sonnet a few years ago, and I then used it to benchmark several solvers, mainly to show that for planar problems 2.5D solvers give much better accuracy vs. simulation time than 3D ones.
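For reference, the zero-thickness centered stripline has an exact closed-form impedance (Cohn's conformal-mapping result, if I remember the attribution right), so the reference value costs nothing to compute. A quick sketch:

    import numpy as np
    from scipy.special import ellipk  # note: takes the parameter m = k**2

    def stripline_z0(w, b, eps_r=1.0):
        """Exact Z0 (ohms) of a zero-thickness strip of width w centered
        between ground planes spaced b apart (conformal-mapping result)."""
        k = 1.0 / np.cosh(np.pi * w / (2.0 * b))   # sech(pi*w/(2b))
        kp = np.tanh(np.pi * w / (2.0 * b))        # complementary modulus
        return (30.0 * np.pi / np.sqrt(eps_r)) * ellipk(k**2) / ellipk(kp**2)

    print(stripline_z0(w=1.0, b=1.0))  # ~65 ohms in air
    # error-vs-runtime curve: sweep mesh density in the solver and compare
    # the simulated Z0 against this value at each refinement level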
I use AWR (Axiem, Analyst), CST, and Sonnet. I'm satisfied with accuracy; I need more speed. It's at a point where material and manufacturing tolerances are more of an issue, so I need to run parameter sweeps, yield analysis, etc. Multi-threading, GPU acceleration, and distributed computing all help with that.
Working on a thick patch antenna today. Axiem (2.5D, infinite substrate) said 1.57 GHz, Analyst said 1.51 GHz, and it actually measured 1.53 GHz. That's a ~3% error on an antenna with an 8% 12 dB return-loss bandwidth. Those simulated results are after mesh convergence, so finer meshing won't help.
So I use 2-3 solvers on a single problem, which makes quick and easy geometry translation between them important. That's where AWR comes into play; everything plugs into it. A 3D solver's modeler is horrible for pushing polygons during the design process; 2.5D solvers are awesome for this, but then you sometimes need to push the geometry out to the 3D solver.
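To make the tolerance/yield point concrete: the outer loop is embarrassingly parallel, which is exactly why multi-threading and distributed computing pay off. A toy Monte Carlo sketch, where run_solver is a hypothetical stand-in for a call into any of the above tools (the model inside it is made up purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def run_solver(eps_r, h_mm):
        # hypothetical stand-in: resonant frequency (GHz) of a patch on a
        # substrate with permittivity eps_r and thickness h_mm (toy model)
        return 1.53 * np.sqrt(2.2 / eps_r) * (1.0 + 0.01 * (h_mm - 1.6))

    N = 500
    eps_r = rng.normal(2.2, 0.02, N)   # material tolerance
    h = rng.normal(1.6, 0.05, N)       # substrate-thickness tolerance, mm
    f0 = np.array([run_solver(e, t) for e, t in zip(eps_r, h)])
    print(f"yield within +/-20 MHz: {100.0 * np.mean(np.abs(f0 - 1.53) < 0.02):.1f}%")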
+1 on this. I'm not an engineer but a plasma physicist who lives in Particle-in-Cell sims, so a very different regime and perspective.
Accuracy isn't really at the forefront of our concerns, because most EM solvers since the '70s are good enough in those terms, and going to higher-order methods isn't worth it for us if it is that much slower. What we need is speed. Give me a way to do 50 simulations in a month that are converged enough to let me do a parameter scan over laser phase, focal point, etc. Let me do more 3D simulations. That's what I need. The reason is that, for me, plasma is so fuzzy anyway that the nth-order term of the error expansion pales in comparison to the laser focusing half a micron off target, which is a much more common source of error bars in a real experiment.
I imagine it's similar for engineers: the solvers are good enough for most problems, so just make them faster and let us do more 3D simulations in less time.
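To be concrete, the scan itself is trivial to script; all the cost is in the solver, which is why throughput matters more to us than another order of convergence. A minimal sketch, assuming a Slurm cluster and a hypothetical pic_job.sh batch script that takes the swept parameters as arguments:

    import itertools, subprocess

    phases = [0.0, 0.25, 0.5, 0.75]          # laser phase, in units of 2*pi
    focal_offsets = [-1.0, -0.5, 0.0, 0.5]   # focal-point offset, microns

    for phase, dz in itertools.product(phases, focal_offsets):
        # pic_job.sh is a hypothetical wrapper that writes one input
        # deck and runs a single PIC simulation with these values
        subprocess.run(["sbatch", "pic_job.sh",
                        f"phase={phase}", f"focus_um={dz}"], check=True)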
I went to a defense-oriented EM conference about 25 years ago. One of the military guys slammed a TWT down on the table and said he'd pay $1M to have it modeled; he was serious. Of course, now I can buy CST Particle Studio, so I assume it's feasible.
No - just pulled the paper on it and put that on my to-do list. We've been focused on large problems recently and people seem happy to stick with scattering by spheres (PEC or dielectric) and comparison with the Mie solution. Way too much symmetry to serve as a comprehensive benchmark, but a decent way to compare computational efficiency. Our current benchmark run for a 100 wavelength diameter PEC sphere is 48 minutes on 256 CPU cores with 0.13% RMS error in the far field. We recently got 1.8% far field error for the 500 wavelength case on 300 cores in 17.8 hours. Our preliminary 1,000 wavelength numbers are very promising, too. No GPU/MIC or unusual hardware for those tests - all on a cluster of modern servers with Intel Xeon CPUs with 2-4 GB RAM per core.
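In case it's useful to anyone as a reference curve: the monostatic RCS of a PEC sphere from the Mie series is only a few lines. This is the standard textbook form (e.g. Balanis; e^(jwt) convention with the spherical Hankel function of the second kind), so treat it as a sketch:

    import numpy as np
    from scipy.special import spherical_jn, spherical_yn

    def pec_sphere_backscatter(a, lam):
        """Monostatic RCS (m^2) of a PEC sphere of radius a at wavelength lam."""
        x = 2.0 * np.pi * a / lam                  # size parameter ka
        n = np.arange(1, int(x + 4.0 * x**(1.0 / 3.0) + 10) + 1)
        jn, jnp = spherical_jn(n, x), spherical_jn(n, x, derivative=True)
        yn, ynp = spherical_yn(n, x), spherical_yn(n, x, derivative=True)
        h2, h2p = jn - 1j * yn, jnp - 1j * ynp     # spherical Hankel, 2nd kind
        an = jn / h2                               # "TM" Mie coefficient
        bn = (jn + x * jnp) / (h2 + x * h2p)       # "TE": d/dx[x f(x)] = f + x f'
        s = np.sum((-1.0)**n * (2 * n + 1) * (bn - an))
        return lam**2 / (4.0 * np.pi) * np.abs(s)**2

    a = 1.0
    lam = 2.0 * np.pi * a / 10.0                   # ka = 10
    print(pec_sphere_backscatter(a, lam) / (np.pi * a**2))  # oscillates near 1 (optics limit)

Sanity check: in the small-sphere limit this reproduces the Rayleigh result sigma/(pi*a^2) = 9(ka)^4.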
Finding good benchmarks for sharp corners has been more challenging. The one we've been using for that is planewave scattering by a PEC cube and we test that the fields inside are 0 everywhere (including arbitrarily close to the surface at corners and edges).
Thanks for your other comments - geometry translation comes up often. Post-processing as you mentioned elsewhere is a common pain point, too, but solutions there seem to be pretty application/domain specific.
Nice; Mie scattering. I was explaining creeping waves to someone recently; cool stuff. I have not done too much with RCS or large structures; I'm mostly RF/microwave circuits and antennas, though as I move into mmWave, electrically large antennas (both arrays and reflectors) will become an issue.
Does your code handle lossy dielectrics?
Any way to instantiate near-field excitation sources on large structures? That's one nice thing about CST: save the near-field results from an FEM antenna simulation and instantiate them in a TLM simulation of a large structure.
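(For anyone following along, the machinery behind that workflow is just the Love equivalence principle: record tangential E and H on a closed Huygens box around the antenna, convert them to equivalent currents, and replay those as sources in the second solver. A sketch of the conversion step, with the field arrays assumed already exported from the first tool:

    import numpy as np

    def equivalent_currents(E, H, n_hat):
        """Love equivalence principle on a Huygens surface:
        Js = n x H (electric), Ms = E x n (magnetic).
        E, H: (..., 3) complex field samples; n_hat: (..., 3) outward unit normals."""
        Js = np.cross(n_hat, H)
        Ms = np.cross(E, n_hat)
        return Js, Ms

    # one sample on the +z face of the box: x-polarized plane wave
    E = np.array([1.0 + 0j, 0.0, 0.0])      # V/m
    H = np.array([0.0, 1.0 / 376.73, 0.0])  # A/m
    Js, Ms = equivalent_currents(E, H, np.array([0.0, 0.0, 1.0]))
    print(Js, Ms)

The replay step is then solver-specific.)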
As a side note, we had a new near-field chamber installed, and the guy from Orbit/FR used to run several test ranges. We got onto the topic of antenna standards (there really aren't any, even for standard-gain horns). He developed some cylinder standards for scattering; they ship them around the world to verify ranges.
Yes to lossy dielectrics. The caveat being that right now we only have support for homogeneous materials - we have some thoughts on how to bring our methods to continuously varying materials, but that's still a research topic.
We started out focused on RCS problems for algorithm development and validation, but we're shifting to more antenna design and analysis (mounted antennas, installed performance, placement optimization). We have done near-field excitation of our own models on large structures, but usually our goal has been to maintain accuracy so our use case has us solve the driven antenna and the platform together in one go.
Anyone have experience with how it compares to Meep (http://ab-initio.mit.edu/wiki/index.php/Meep)? I've used Meep for a few things but am not so entrenched that I can't switch...
It uses the same method, but openEMS focuses more on RF devices while Meep is aimed more at optics.
One important difference is that openEMS supports graded (inhomogeneous) meshes. You need this for RF devices because some features, e.g. feed lines, are much, much smaller than the wavelength, and you do not want (or need) a dense mesh everywhere in those cases.
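(A graded mesh is typically built by geometric grading away from the small features, capped at some neighbour-to-neighbour ratio, around 1.3-1.5 in FDTD practice. A generic sketch of the idea in Python, not the openEMS API itself:

    import numpy as np

    def graded_lines(x_fine_lo, x_fine_hi, x_max, d_min, d_max, ratio=1.3):
        """1D mesh lines: uniform d_min cells across a small feature
        (e.g. a feed line), then cell size growing by `ratio` up to d_max."""
        lines = list(np.arange(x_fine_lo, x_fine_hi + d_min / 2.0, d_min))
        d = d_min
        while lines[-1] < x_max:
            d = min(d * ratio, d_max)   # cap neighbour-to-neighbour growth
            lines.append(lines[-1] + d)
        return np.array(lines)

    # 0.1 mm cells across a 0.6 mm feed line, growing out to 2 mm cells
    print(graded_lines(0.0, 0.6, 20.0, 0.1, 2.0).round(3))

The same grading is done per axis for a full 3D rectilinear grid.)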
Furthermore, openEMS has a small GUI that displays the defined structure and mesh so you can check that your setup is OK.
https://github.com/thliebig/openEMS-Project