Not my submission, but I am a cryo-electron microscopist if anyone has any questions about what's in the article, or more general. (and have worked with some of the people in the article).
I will comment that the major expense most facilities face is the cost of the service contracts, which are partially parts, but also partially the need to pay multiple talented service engineers to be available to fly in on a moment's notice to troubleshoot and fix the microscopes. Electron microscopes break constantly, and most users are not skilled enough to even troubleshoot them, let alone fix them.
I will also point out that this part of the article:
>Levels of 100 kiloelectronvolts (KeV)—one-third as high—suffice to reveal molecular structure, and they reduce costs by eliminating the need for a regulated gas, sulfur hexafluoride, to snuff out sparks
Is wildly inaccurate. Relative to the cost of a microscope, SF6 and a high-tension tank are absolutely pennies. Frankly, the cost savings are primarily in two areas:
1) The fact that Thermo Fisher isn't involved (the Tundra is a joke and a move for market monopolization)
2) Going from 300 kV (or even 200 kV) down to 100 kV drastically reduces the needed tolerances for parts. 100 kV microscopes have been around forever, though, and almost none are getting to the resolutions of 200 and 300 kV microscopes, although, like Russo and Henderson, I agree that's a solvable problem. It's worth noting that the resolutions they are describing, while encouraging, are not great. 2.6 Å on apoferritin, which is a best-case scenario never seen in the "real world", is quite a ways behind even the cheaper 200 kV scopes that have gotten down to 1.6 Å. This is still firmly in "screening and learning" territory for most flexible samples, which is not without value, but not the answer to the 5 million dollar Krios that we all so desperately want.
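For some intuition on why I call that a solvable problem: the electron wavelength is not the limiting factor at any of these voltages; the practical limits are detectors, optics, and sample damage. A rough back-of-envelope using the standard relativistic wavelength formula (my own illustration, nothing from the article):

```python
import math

def electron_wavelength_pm(kv):
    """Relativistic de Broglie wavelength of an electron accelerated through kv kilovolts, in picometres."""
    h = 6.62607015e-34      # Planck constant, J*s
    m = 9.1093837015e-31    # electron rest mass, kg
    e = 1.602176634e-19     # elementary charge, C
    c = 2.99792458e8        # speed of light, m/s
    V = kv * 1e3
    lam = h / math.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
    return lam * 1e12

for kv in (100, 200, 300):
    print(f"{kv} kV: {electron_wavelength_pm(kv):.2f} pm")
# ~3.70 pm at 100 kV, ~2.51 pm at 200 kV, ~1.97 pm at 300 kV --
# all far below a 2-3 Å (200-300 pm) resolution target.
```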
Re: the national centers in the article, it depends which one you go to. NCCAT is fantastic, in my experience, but S2C2 is in the costly Bay Area and they just can't afford to pay their staff scientists enough. So what happens is you get tossed in with a fresh PhD who is underpaid and uninterested in your project. I've seen, in general, a lack of caring by the staff there, and no desire to understand the specific problems each user is trying to solve. That results in lots of wasted iterations, especially if you are starting from scratch with no experience.
I wonder how much potential there is to make them a lot more reliable and easier to use. Other kinds of instruments in the lab have gotten much more reliable and much easier to use over time, though this is not a particularly quick process. But I suspect this depends a lot on how well you can isolate the fragile parts of the equipment from the users.
Regarding the last paragraph, it's incredibly frustrating if you see time on expensive instruments wasted because there aren't enough experts around and people have to try and figure out stuff for themselves. But it seems that it's almost always easier to buy another expensive instrument compared to hiring an expert on a permanent position.
> I wonder how much potential there is to make them a lot more reliable and easier to use. Other kinds of instruments in the lab have gotten much more reliable and much easier to use over time, though this is not a particularly quick process. But I suspect this depends a lot on how well you can isolate the fragile parts of the equipment from the users.
Going to 100 kV makes everything a little more forgiving in theory. SEMs that operate in the 10 kV range can be had for a full order of magnitude cheaper, although that price probably scales a bit with achievable resolution. But as an example, the 200 kV/300 kV microscopes can't change temperature by ±1 °F over a 24-hour period without the lenses going out of alignment (presumably because of the resistance change in the electromagnets).
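To put a rough number on that guess (my own back-of-envelope, assuming a plain copper winding): copper's resistance changes by about 0.39% per °C, so even a 1 °F drift moves coil resistance by roughly 0.2%, which is huge compared with the roughly parts-per-million current/field stability a high-resolution column needs.

```python
# Rough estimate of how much a small temperature drift changes the resistance
# of a copper lens coil (assumption: plain copper winding near room temperature).
ALPHA_CU = 3.9e-3           # copper temperature coefficient of resistance, 1/degC
delta_T_F = 1.0             # allowed drift from the comment, in Fahrenheit
delta_T_C = delta_T_F * 5.0 / 9.0

relative_change = ALPHA_CU * delta_T_C
print(f"{relative_change:.2%} change in coil resistance per {delta_T_F} F drift")
# ~0.22% -- tiny in absolute terms, but far coarser than the ppm-level
# stability the lens currents/fields need, so both the supplies and the
# room have to be tightly regulated.
```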
>Regarding the last paragraph, it's incredibly frustrating if you see time on expensive instruments wasted because there aren't enough experts around and people have to try and figure out stuff for themselves. But it seems that it's almost always easier to buy another expensive instrument compared to hiring an expert on a permanent position.
The NIH and NSF have been quite willing to provide money to purchase microscopes. They are far less willing to provide money to cover annual operational expenses.
>The NIH and NSF have been quite willing to provide money to purchase microscopes. They are far less willing to provide money to cover annual operational expenses.
This is a fundamental issue worldwide. I completed grad school in Canada and a post-doc in the US, setting up both labs with various GC-MS systems, then worked in the EU with laser spectroscopy and mass spec companies... Everyone had the same lament: plenty of money for new toys, pennies for operations. Heck, even now, after a decade away, I decided to go back, and the Prof had to apologize because, in spite of nearly $10M for new equipment over the past four years, they only have $60K annually for an expert to run them. They're lucky I made bank and my wife is paid pretty well, so I'm taking the plunge, but frack...
Not really. Auditors are accountants (or accountant-adjacent) and have no clue about the technology. If the researcher says "we need it", then they have to take their word for it.
Source: 10 years on the side of companies selling custom widgets
Come on, at $200k apiece, that auditor is going to want to see at least 2 competing offers for that widget, with page-long explanations of why the specific one was chosen.
And if you're going to argue nobody else is offering exactly that, a technologically literate auditor (read: external consultant) might show up when you try that the second or third time in as many years.
I'm going on the assumption that you don't have much experience with public procurement in scientific research.
1- Companies like Thermo, Agilent and all the others spend a long time developing "sole-source" documents. Effectively a list of BS specs that only they can achieve and/or offer. Sometimes it's a specific technology (e.g. "must be this patented tech because reasons" and pull out a bunch of references) or a specification. Like Thermo is infamous for "molecules per ion" with some of their mass specs. It's totally meaningless on a practical level and can't even be accurately assessed. But they made it up, and anytime a competitor tried to put down a better number the response is "prove it". The competitor can't. That's one way to ensure you'll win. It can also be buried in the main cost of the instrument, or listed as a critical upgrade, whatever.
2- you're giving waaaaay too much credit to the University or Research Institute evaluators. This has always worked, somehow or other, there's a way around the rules for a determined researcher to get their toy. Plus, if $200K is awarded for instrumentation, then $200K is precisely what the widget will cost.
You can argue with a hypothetical "this is how the world should work"; I'm speaking from a decade of experience of sitting in sales meetings with our rep shooting a number off the cuff that's high enough to make it worth his while to file an ROQ, but within the range to fly with procurement, and can be sole-sourced. Sometimes every year for 3 or 4 years, especially if the researcher is a star who consistently brings in grants.
Also, I don't know where you are, but if an external consultant is hired (LOL) they're probably already consulting the companies on getting around public procurement rules
I really only have experience from the research side. I've bought instruments and was audited.
I don't doubt that Thermo can argue their way out of a competing suppliers requirement. And for what it's worth, in my field there actually are suppliers that dominate the market because of the legit specs they can offer.
But a 1-man LLC, that also happens to be your instrumentation tech? 6 digits, every year, for 5, 10, 20 years in a row (because that's how long you'd ideally want to keep a good tech)?
No way. You can do that twice, maybe three times. After that, you're standing in the lab, next to those three gadgets you bought, with an auditor and possibly an external consultant (and that guy is not working for Thermo, he's just a PI working in the same field as you, with grants similar to yours, and the funding agency is paying him to be there for a couple of hours).
Universities do a lot of shady shit, with significant amounts of money. But "repurposing" 6-figure amounts of instrumentation investment every year needs significant amounts of criminal energy to pull off - or an LLC that actually provides some value by doing actual work.
If that instrumentation tech's LLC is doing stuff like pump maintenance, for example, the expenditure is much easier to explain. But doing something like that comes with lots of liability, compliance requirements, etc., and you'd better hope you never have a pump fail and ruin that $5M instrument...
>2- you're giving waaaaay too much credit to the University or Research Institute evaluators. This has always worked, somehow or other, there's a way around the rules for a determined researcher to get their toy. Plus, if $200K is awarded for instrumentation, then $200K is precisely what the widget will cost.
Not that there's no wiggle room, but the NIH and NSF have auditors, who both then bring in field experts. They ask for photographs of the instrumentation, serial numbers, etc. Then my university has its own auditors, who do the same thing, including in-person visits and brief demonstrations, asset tags, and various other pieces of paperwork. Finally, the state itself has auditors who also do the same thing, although it would probably be pretty easy to pull the wool over the state auditors' eyes.
Under $5000? Yeah, it'd be trivial. But anything over $5000 comes under quite a bit of scrutiny.
I’m a graduate student doing computational biology. I wonder sometimes what cryo-EM will look like in the future. Do you think it will ever be possible to get a cryo-EM image of a volume of tissue and get a count/location of all the proteins/RNA in each cell?
This would be the holy grail of data for my purposes, but I have no idea if it's even remotely feasible.
The other answers to your question are quite on point. Of course there are many people interested in this. We aren't there yet, and probably won't ever be (using this technique), but we could begin to paint a picture with lots of correlative methods. Whole-cell tomography exists, plus correlated light microscopy, fluorescent tagging of specific molecules, BONCAT, FISH, etc. are all techniques that can be combined to give at least enough of an answer to specific questions that, over time, we can really build a good picture.
Whole viruses, obviously, have been done and are quite interesting.
if you just want count/location, super resolution techniques (https://www.science.org/doi/10.1126/science.ade2676) and proximity labeling (https://www.biorxiv.org/content/10.1101/2023.10.28.564055v1) may be a good starting point. cryo may be able to help with that if the direct electron detectors get better (as, in my understanding (and in my experience), cryo-ET data is quite noisy). my guess is that multiple techniques will have to be combined and processed with sophisticated computational pipelines to make this a reality. each technique provides some information on the state of the cell, so the computational question becomes whether you can figure out how to integrate information between these techniques to get the specific information you want. it may be too early to tackle this, but who knows...
I'm just another grad student doing something between comp bio and structure, so take this all with a grain of salt, but I'd read about cryo-ET, FIB milling, structural proteomics, and spatial proteomics (Matthias Mann's group's work comes to mind as an example of some of the wildest stuff people can do). Plenty of groups are working on what you describe, but the details are tough.

To give a non-technical description of the problem, most structure techniques rely on either explicit or implicit averaging to get good resolution. Small crystals or particle numbers, conformational heterogeneity, and beam-induced radiation damage are big resolution killers. People have figured out all kinds of tricks that amount to purifying a ton of protein, making sure it crystallizes well/is relatively homogeneous with a good distribution of ways it sticks to a grid, and minimizing sample beam exposure during diffraction/SPA, which require relatively little, anyway.

With cryo-ET, though, you aren't taking a ton of micrographs of a ton of particles or quickly zapping a crystal (well, you're rotating it and exposing it for some time under a little jet of super cold air, but hey, it works) that diffracted precisely because you got a large, relatively structurally homogeneous lattice; you're taking a series of micrographs of the same cell, which is structurally probably way more different from other cells than one protein conformation is from another (and that's if you culture/sort your cells ahead of time instead of working with tissue). Meanwhile, you're frying your sample from the prolonged exposure time a tilt series requires and not even getting a full tilt series because you can't rotate the darn sample holder 360 degrees (though people train CNNs to get around the missing wedge problem now).

Once you've reconstructed a structure from your tomograms, how are you going to assign the blobs and squiggles you think you see in your gray static? You either need to know what you're looking for ahead of time, which requires pre-existing structures and good guesses on location (presumably reasonable to do with something like actin or microtubules, pretty tough with a John Doe globular, cytosolic protein that looks like everything else at bad resolution), some kind of labeling (so now you probably need a cryo-CLEM or whatever too, or to add something pretty big/distinct, which comes with other problems), or you add mass-spec somehow. Oh, and I took the FIB milling for granted when describing this, btw (if you want to look at an entire cell, trying to FIB mill the entire thing is like trying to replace all your walls with glass, so we're back to sampling the right parts/averaging).

If you add mass-spec, you're basically trying to make the smallest laser microdissection slices ever on a sample that you might have to almost destroy if you want to see more than figments of your imagination. So, hundreds of stupid little problems, but people are working on it, and there's work for people with strong computer vision/ML backgrounds.
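To make the missing-wedge point a bit more concrete, here's a toy numpy sketch (purely illustrative, not from any particular package) of the Fourier-space mask a ±60° single-axis tilt series leaves you with:

```python
import numpy as np

def missing_wedge_mask(shape, tilt_max_deg=60.0):
    """Binary Fourier-space mask for a single-axis tilt series.

    Tilt axis is assumed to be y; voxels inside the unsampled wedge
    around the kz (beam) axis are set to 0.
    """
    nz, ny, nx = shape
    kz = np.fft.fftfreq(nz).reshape(-1, 1, 1)
    kx = np.fft.fftfreq(nx).reshape(1, 1, -1)
    # Angle of each (kx, kz) point measured from the kz axis.
    angle = np.degrees(np.arctan2(np.abs(kx), np.abs(kz)))
    # A +/- tilt_max series leaves a wedge of half-angle (90 - tilt_max) unsampled.
    sampled = angle >= (90.0 - tilt_max_deg)
    sampled |= (kx == 0) & (kz == 0)   # keep the DC term
    return np.broadcast_to(sampled, shape).astype(np.float32)

mask = missing_wedge_mask((64, 64, 64), tilt_max_deg=60.0)
print(f"{100 * (1 - mask.mean()):.0f}% of Fourier space is unsampled")  # ~29% at +/-60 degrees
```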
RNA is a totally different problem since it's often described as pretty "floppy" compared to protein and we don't have that many RNA structures, so presumably it's harder to assign things unless you're looking at well-studied complexes. Aren’t transcriptomics people working on this, though? Maybe MERFISH does kind of what you want just without pipelines for depth assignment yet (or maybe these exist).
>(the Tundra is a joke and a move for market monopolization)
Could you elaborate on this? From a distance it's hard to understand what the goal of the Tundra is, or who it is aimed at. I thought it may have been an initial offering while they optimise their engineering for low-keV scopes focused on biology.
The Tundra is supposed to be aimed at the same audience as this microscope. It's Thermo's smaller, cheaper, easier-to-maintain microscope. They are essentially trying to close off the bottom of the market from their competitors (JEOL and Hitachi). They are rightfully concerned that if people start buying 100 kV JEOL and Hitachi microscopes to learn/screen with, they will start preferring those when they go for large data collections, and that will erode the cash cow that is the Titan Krios 300 kV TEM.
It's quite new, and there are many, many issues with it from a value standpoint. First of all, it's really not much cheaper than their previous budget 200 kV Glacios, and it's not much cheaper than JEOL's 200 kV offering either. But there are rather insane compromises that make no sense.
They are saying that you need less space for the microscope, and while that's true, you also need an additional "loading station" that is quite large and necessarily complicated, that you don't need for the 200 kV and 300 kV microscopes. As an example, the loading station is a large desk that needs its own liquid nitrogen source - not needed on other microscopes. It has a computer that loads clipped grids into an autoloading arm - not needed on other microscopes. It uses the same expensive 12-grid cassette, but you can only use I think 5 of the slots in it, and only one at a time - again, not needed on other microscopes, which can load all 12, at the same time.
The microscope needs manual nitrogen filling; you can't just hook up a tank to it. You need to manually open the microscope doors and pour nitrogen in, which is an awful idea for new users and seriously limits its autonomy. You can't just queue things up and forget about it any longer.
So in terms of ease of use, they've added a ton of complexity, but they haven't really reduced costs all that much. It really seems like a double miss.
Where it gets insidious, though, and Thermo displays their true hand, is on the software side. This is somewhat esoteric to those not in the field, but basically Thermo has always (since they were Philips, and later FEI) sold their microscopes with a scripting package that you can purchase. You can then use that scripting package to control basically every aspect of the microscope. That led to the development of open-source software such as SerialEM and Leginon. Those software packages are single-handedly responsible for the automation of data collection that has enabled cryo-EM protein structures. It would not have been possible on Thermo scopes without it, and frankly, Thermo didn't have the foresight to design their software to be capable of the techniques those open-source packages employ. i.e., the exploding field of cryo-EM that Thermo capitalizes on by selling many 5 million dollar Krios microscopes every year would never have existed to begin with if it weren't for the scripting package. And every year, people come up with new, ingenious ways to use that scripting package to do new, innovative things.
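For readers outside the field, the kind of thing that scripting access enables looks roughly like the sketch below. The object and function names are made up for illustration (this is not the actual SerialEM/Leginon or vendor API), but automated single-particle collection is conceptually this: walk a list of targets, refocus, and record a movie at each one, all night long.

```python
# Illustrative only: 'scope' and its methods are hypothetical stand-ins for
# the kind of scripting interface that SerialEM/Leginon build on.
def collect_overnight(scope, targets, defocus_um=-1.5, exposure_s=2.0):
    for i, (x, y) in enumerate(targets):           # holes picked earlier at low mag
        scope.move_stage(x, y)                      # go to the target hole
        scope.autofocus(target_defocus=defocus_um)  # measure and set defocus nearby
        scope.correct_image_shift()                 # recenter with beam/image shift
        movie = scope.record_movie(exposure_s)      # dose-fractionated exposure
        movie.save(f"movie_{i:05d}.tif")            # hand off to motion correction later
```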
Well, now with the Tundra, Thermo has gotten rid of the scripting package. You are stuck using their data collection software. You can't even manually align the microscope anymore - everything is "automated" for "ease of use". Additionally, they've introduced multiple new APIs, some of which you are required to buy separately if you want access to specific features. Essentially, they are going for two things: artificial market segmentation, and getting users locked in on their software from the ground up. It has not been well received, has severe limitations, and, in my opinion, could really use an antitrust investigation.
Thanks for your comment and perspective. I wasn't aware JEOL and others had 100 kV competitors suitable for imaging biological samples under cryo conditions.
The situation with Thermo Scientific is certainly concerning, and one could imagine a scenario similar to where the large software companies reportedly hire workers in part just to starve the competition of talent. (In Europe, lack of engineer availability is a major complaint I've heard from CRYO ARM users, which could fit within this framework.) Thankfully the UK blocked Thermo's attempted acquisition of Gatan, and now with the Apollo there will at least still be competition in the market for detectors.
It sounds like the situation with EPU/SerialEM has some similarities to the situation in data processing for single-particle reconstructions with cryoSPARC vs the open academic software packages. Not just the use of open licenses for code, but decades of algorithm development and open discussion in the literature clash with the potential to use patents, closed formats, and general lack of interoperability to give a single provider an effective monopoly. Maybe simply having commercial funding for UI development and testing will be enough to drive most (new) users to the commercial platform. One positive development in this area was the recent release of software for motion correction and CTF estimation under permissive licenses (https://github.com/czimaginginstitute). However the trend, at least in PDB depositions, is decidedly in favour of closed-source commercial software.
> Thanks for your comment and perspective. I wasn't aware JEOL and others had 100 kV competitors suitable for imaging biological samples under cryo conditions.
Just to be clear, I think the Tundra's market ("entry level" or about 1.5 million) is actually relatively under-served. The bad behavior is Thermo coupling their entry level microscope with a bunch of software restrictions that will prevent adversarial interoperability with other vendors, something that has not been the case in the past. Their previous entry-level line, the Tecnai Spirit 120 kV, had a full scripting package and could run SerialEM/Leginon, etc, and use Gatan or TVIPS or AMT or any other camera package.
>It sounds like the situation with EPU/SerialEM has some similarities to the situation in data processing for single-particle reconstructions with cryoSPARC vs the open academic software packages. Not just the use of open licenses for code, but decades of algorithm development and open discussion in the literature clash with the potential to use patents, closed formats, and general lack of interoperability to give a single provider an effective monopoly. Maybe simply having commercial funding for UI development and testing will be enough to drive most (new) users to the commercial platform. One positive development in this area was the recent release of software for motion correction and CTF estimation under permissive licenses (https://github.com/czimaginginstitute). However the trend, at least in PDB depositions, is decidedly in favour of closed-source commercial software.
It was nice to see MotionCor and AreTomo go open source. Generally, I'm less concerned about the reconstruction side because the file formats are pretty well standardized (despite being awful), so there's no lockout. I love RELION and EMAN2 dearly, but RELION in particular needs to throw an FTE at UX design. CryoSPARC is just so much easier to use, easier to manage, easier to onboard folks. RELION really is a nightmare of complexity for new users.
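As a small aside on why those standardized (if awful) formats matter: anything that writes plain MRC can be read by anything else. For example, with the open-source mrcfile Python package (assuming you have it installed; the filename is just a placeholder):

```python
import mrcfile

# Read a map/movie/stack written by any package that speaks MRC (RELION,
# cryoSPARC exports, motion-corrected micrographs, etc.) -- no vendor lock-in.
with mrcfile.open("run_class001.mrc", permissive=True) as mrc:
    data = mrc.data                  # numpy array: 2D image, 3D volume, or stack
    voxel_size = mrc.voxel_size      # (x, y, z) in angstroms, from the header
    print(data.shape, data.dtype, voxel_size)
```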
You seem like you have quite a bit of experience - I'm curious what your background is?
> I love RELION and EMAN2 dearly, but RELION in particular needs to throw an FTE at UX design. CryoSPARC is just so much easier to use, easier to manage, easier to onboard folks.
I've heard that the CCP-EM is working on a new front-end for RELION. It's doubtful that it would reach the same ease-of-use as cryoSPARC, but it might be a step in the right direction.
> You seem like you have quite a bit of experience - I'm curious what your background is?
I'm a freshly minted postdoc. I studied biochemistry and somehow spent about half of my PhD processing single-particle cryoEM data.
I would think that the "need to pay multiple talented service engineers to be available to fly in on a moment's notice" is reduced when you are talking about a $500k capital expense sitting idle vs $5M. If you are willing to risk letting it sit a few days, then you can spread the technician cost among a larger pool.
Never, ever be surprised by the ability of people with large amounts of money to buy expensive toys (err, tools) and then go cheap on all the supporting infrastructure. Like, $1M on a scope, which then runs at about 1/10th capacity because the network is too slow to drain the SSD.
I call this the "You bought a Ferrari to drive on 101 at 10MPH when you really needed a fleet of trucks" problem
I thought it was the "our university/lab wanted to put out a press release that we just spent $xx on cutting edge equipment, to prove how advanced we are" problem.
While that's true, it'd be exacerbated by a few things (in theory):
1) These instruments would still need to generate income to cover service costs.
2) Income generated per microscope would be reduced because of increased competition lowering beam time prices dramatically, so downtime is still very bad for microscope facilities.
3) More microscopes spread out over a wider geographical area means more service engineers are needed (something I've experienced first-hand: being in a state with only 3 microscopes, Thermo has been entirely unwilling to place a service engineer here because they can't cover the costs with just 3 microscopes, despite making probably ~$1 million/year in service contracts).
In general, I think reduced costs and increased accessibility are a very good thing, but when VPRs go to do the math on these, I think they still don't make a lot of sense.
Generally, most instrument vendors scale the service contract with the cost of the instrument, often around (very roughly) 10% of list price per year. So in a sense a cheaper instrument gets cheaper service. I'm not sure if that means you'll be sitting with a down instrument longer (that often depends on how annoying or important you are) or just that parts are going to be cheaper and there are fewer things that can break. Some stuff is very reliable and you never really get your money's worth from the service contract, and some stuff (like electron microscopes) breaks often.
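Taking that very rough 10%-of-list heuristic at face value and plugging in the price points mentioned elsewhere in the thread (illustrative numbers only):

```python
# Very rough illustration of the ~10%-of-list-price-per-year service heuristic.
SERVICE_FRACTION = 0.10
for name, list_price in [("100 kV screening scope", 500_000),
                         ("Titan Krios-class 300 kV", 5_000_000)]:
    print(f"{name}: ~${SERVICE_FRACTION * list_price:,.0f}/year in service")
# ~$50,000/year vs ~$500,000/year -- the contract alone on the big scope is
# on the order of a couple of staff salaries, every year, for its whole life.
```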
I have a question that's unrelated to the article, which you might be able to answer… is it viable to use something like AlphaFold to predict whether a protein variant is folding the same way as the wild type, or would you always need to validate that with imagery?
AlphaFold is a real game-changer in predicting many protein structures, but its precision in dealing with single residue mutations, particularly in non-standard proteins, isn't a sure bet.
The tool excels because it's been trained on a massive database of known protein structures. It's great at making educated guesses based on that data, but it's not as reliable when it comes to variations that don't have much historical data, like specific mutations at the residue level.
For these finer details, traditional physics-based methods, like molecular dynamics simulations, might offer more insight. They really get into the atomic-level interactions, which can be critical for understanding the subtle effects of amino acid changes.
AlphaFold is likely to identify significant structural changes, but it might not be your go-to for pinpointing smaller, more nuanced shifts.
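If you want a quick-and-dirty first pass before committing to simulations or experiments, one crude thing you can look at is whether the predicted confidence (pLDDT, which AlphaFold writes into the B-factor column of its output PDB) drops around the mutated residue in the variant prediction relative to wild type. Treat it as a screening heuristic at best, for the reasons above. A sketch with Biopython (file names and the flagging threshold are placeholders):

```python
from Bio.PDB import PDBParser

def per_residue_plddt(pdb_path):
    """Return {residue_number: pLDDT} read from the B-factor column of an AlphaFold model."""
    structure = PDBParser(QUIET=True).get_structure("model", pdb_path)
    chain = next(structure[0].get_chains())          # assume a single-chain monomer
    return {res.id[1]: res["CA"].get_bfactor() for res in chain if "CA" in res}

wt = per_residue_plddt("wildtype_model.pdb")         # placeholder file names
mut = per_residue_plddt("variant_model.pdb")
for resnum in sorted(set(wt) & set(mut)):
    drop = wt[resnum] - mut[resnum]
    if drop > 10:                                    # arbitrary threshold, tune to taste
        print(f"residue {resnum}: pLDDT {wt[resnum]:.1f} -> {mut[resnum]:.1f}")
```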
AlphaFold is basically a pattern matching program on steroids. Don't get me wrong, it's incredible, but if there's not experimental data somewhat describing what that variant would look like, AlphaFold is liable to just make spaghetti out of it.
Ugh... OK, so not reliable enough to drive experimental follow-ups, it seems.
Roughly, what would it cost to image a certain protein if I sent you the mutant cell lines? I've got 4 mutants whose structure I'm interested in looking at. I can get into more details, but essentially, what I'm trying to figure out is if a truncated version of this protein can be created that would expose all the right binding sites and maybe fold somewhat correctly, in order to be delivered via AAV for gene therapy.
The mutants I want to look at aren't the truncated protein but the actual diseased types and if I saw structural problems in them, I might assume that fixing said structure could restore function.
Send me an email to the one in my profile - I'd be happy to chat a bit. Not sure a structure is the best way to get the information you're looking for.
Are you thinking of scanning electron microscopes? Those will give you surface topography - the electrons don't pass through the sample, but instead backscatter. Platinum and gold are commonly sputtered on top to help with conductivity.
With Cryo-EM, it's a transmission electron microscope. So the sample is frozen into a very thin (~30ish nm if you're lucky) sheet of vitreous ice, and then shot with electrons. The electrons pass all the way through the ice and the sample, and you get a projection image of whatever was in that ice.
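A toy illustration of what "projection image" means here (just numpy, nothing cryo-specific): build a 3D density and sum it along the beam axis. That 2D sum is roughly what lands on the detector, which is why you need many particles in many orientations to recover 3D information.

```python
import numpy as np

# Toy 3D "density": two Gaussian blobs in a 64^3 volume.
n = 64
z, y, x = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")

def blob(cz, cy, cx, sigma=4.0):
    return np.exp(-((z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

density = blob(32, 24, 24) + blob(40, 40, 40)

# A TEM image is (to first approximation) a projection through the sample:
# integrate the density along the beam (z) axis to get a 2D image.
projection = density.sum(axis=0)
print(density.shape, "->", projection.shape)   # (64, 64, 64) -> (64, 64)
```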
This looks like the biologist equivalent of not having to mail your punch cards and wait for the results. You still won’t be able to afford one of your own, but your lab probably can.
I know Caltech did/does use it in their undergraduate core(?), and even has Dr. Prakash speak w/ the students. That said, I'm pretty sure field application of the device hasn't really panned out and runs into resolution problems. I'm honestly not sure if that was the intention, but checking a few literature review articles and recent pubs doesn't seem to show any major success stories, and mostly notes about resolution.
My favorite invention from his lab is the Paperfuge[^1], though the device is probably too recent to know how useful it'll be. Considering that so much point-of-care diagnostics can be done with either a cheap lateral-flow kit or some Cepheid or Abbot microfluidic product though, a non-integrated centrifuge might be a bit tougher to justify.
This and the precursor article in Science were very informative to this novice. Yet both articles seemed, by the halfway point, to devolve into marketing literature for Thermo Fisher.
Probably not. Plenty of toxic proteins have a published structure and are quite easy to grow in a lab (i.e. an undergraduate could do it). The problem with using proteins to do evil things is that they tend to have short half-lives, aren't easily transmissible, etc. There are some exceptions to this, of course, but there's far more practical ways to do evil things.
It seems implausible, and weighed against the probability of positive scientific discoveries made with the technology, it seems reasonable to proceed with caution.
Back when I was a maker, I talked with the FBI, and they said they knew it was OK/legal to work on viruses in your garage, "but just be sure to let us know if you see somebody doing something dangerous".
Damn near every university microscope facility has a spinning-disk confocal microscope that costs as much, so yeah, if it only costs half a mill instead of 10x that, it'll be more widely available.