I recall when I entered college. The first thing was mandatory English classes.
The logic was that if you cannot communicate, you cannot explain why your job, or what you're doing, is important. Whether it has value. Whether you have value. You cannot hope to explain requirements to others, or explain the logic and reasons, the "why", of a technical path.
You're likely correct that a lot of people think this is unimportant. To them I'd say: if you don't think communicating is important, you're severely limiting your career.
That's really interesting to me. I consider writing to be a "raw technical skill." Programming and writing are inextricably linked. The lexicon of software borrows heavily from writing: language, syntax, grammar, statement, and expression. Even the way we critique code heavily overlaps with how an editor critiques writing: consistent, readable, elegant, concise or verbose, and follows a style guide.
I have been doing backend/infrastructure coding for years and have been thinking about trying embedded work, but am unsure how to break into that area. Curious whether you (your industry) would be interested in someone with a lot of Linux/systems experience but none in the embedded space?
I've had a hell of a time getting into embedded Linux professionally. I don't have that specific job experience on my resume, but I have lots of related open source work, writing, kernel work, etc. I can do this, I just can't prove it very well.
What would you recommend I do? Looking for any more devs?
A lot of the work here is adapting a vendor-provided BSP (which can range from an esoteric mix of ancient kernels and bootloaders to top-quality, community-maintained mainline kernels) to your custom board/product.
The Hackaday community might be a good place to train/find such people, although they are more focused on non-Linux bare-metal code, I expect.
I guess you are aware of the consulting companies in this space? Baylibre and Denx (now NABLA) come to mind. There are probably more embedded Linux companies on the FOSSjobs wiki. Looking at people/companies contributing to related areas of the Linux codebase is another option.
Nobody who delivers any system professionally thinks it’s a bad thing to plan out and codify every piece of the problem you’re trying to solve.
That’s part of what waterfall advocates for. Write a spec, and decompose to tasks until you can implement each piece in code.
Where the model breaks - and what software developers rightly hate - is unnecessarily rigid specifications.
If your project’s acceptance criteria are bound by a spec that has tasked you with the impossible, while simultaneously being impossible to change, then you, the dev, are screwed. This is doubly true in cases where you might not get to implementing the spec until months after the spec has been written - in which case, the spec has calcified into something immutable in stakeholders’ minds.
Agile is frequently used by weak product people and lousy project managers as an excuse to “figure it out when we get there”. It puts off any kind of strategic planning or decision making until the last possible second.
I’ve lost track of the number of times that this has caused rework in projects I’ve worked on.
>That’s part of what waterfall advocates for. Write a spec, and decompose to tasks until you can implement each piece in code.
That's what agile advocates for too. The difference is purely in how much spec you write before you start implementing.
Waterfall says specify the whole milestone up front before developing. Agile says create the minimum viable spec before implementing, then get back to iterating on the spec straight after putting it into a customer's hands.
Waterfall doesn't really get a bad rap it doesn't deserve. The longer those feedback loops are, the more scope you have for fucking up and not dealing with it quickly enough.
I don’t think this whole distinction between waterfall and agile really exists. They are more like caricatures of what actually happens. You have always had leaders who could guide a project in a reasonable way: plan as much as necessary, respond to changes, and keep everything on track. And you have people who did the opposite. There are plenty of agile teams that refuse to respond to changes because “the sprint is already planned”, which then causes other teams to get stuck waiting for the changes they need. Or you have the next 8 sprints planned out in detail with no way to make changes.
In the end, there is project management that can keep a project on track while also being able to adapt to change, and there is project management that can’t and chooses to hide behind some bureaucratic process. That has always existed and will keep existing no matter what you call it.
Most of the people you describe here will try to start changes at the last possible second, and since our estimates are always wrong, and preemptions always happen, they start all changes too late to avoid the consequences of waiting too long. It is the worst of all worlds, because the solution and the remediation are both rushed, leading to tech debt piling up instead of being paid down.
No battle plan survives contact with the enemy. But waterfall is not just a battle plan, it’s an entire campaign. And the problem comes both from trying to define problems we have little in house experience with, and then the sunk cost fallacy of having to redo all that “work” of project definition when reality and the customers end up not working the way we planned.
And BTW, trying to maintain the illusion of that plan results in many abstractions leaking. It creates impedance mismatches in the code and those always end up multiplying the difficulty of implementing new features. This is a major source of Business and Product not understanding why implementing a feature is so hard. It seems like it should just fit in with the existing features, but those features are all a house of cards built on an abstraction that is an outright fabrication.
> Single cycle readings defeat the point of sigma delta ADC setups.
The chip's internal delta-sigma modulator takes a lot of samples at a much higher modulation frequency and presents them as a single output value.
You do not get the direct delta-sigma output from an ADC like this. The internal logic handles that for you. It's okay to take single samples of the output.
OP is using the chip with the data rate set to 8 samples per second.
Natively/internally, it runs at 860 samples per second, and you can configure it to report data at a lower rate, with lower noise, by averaging multiple readings together internally.
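A rough sketch of why the lower data rate is quieter (this simulates the averaging in software; the actual chip does its own digital filtering, and the noise model here is a made-up Gaussian, not the real part's noise spec):

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 1.234          # volts, the signal being measured (arbitrary)
NOISE_SD = 0.010            # per-raw-sample noise, arbitrary for this sketch
RAW_RATE, OUT_RATE = 860, 8 # internal rate vs. configured data rate
N = RAW_RATE // OUT_RATE    # raw samples folded into each output sample (107)

def raw_sample():
    # One noisy reading from the fast internal modulator (simulated)
    return TRUE_VALUE + random.gauss(0, NOISE_SD)

# One second of raw readings, then decimated to the 8 SPS output
raw = [raw_sample() for _ in range(RAW_RATE)]
decimated = [statistics.mean(raw[i * N:(i + 1) * N]) for i in range(OUT_RATE)]

raw_err = statistics.pstdev(raw)
dec_err = statistics.pstdev(decimated)
print(f"raw-sample noise spread: {raw_err:.4f} V")
print(f"8 SPS output spread:     {dec_err:.4f} V")
```

Averaging N independent samples cuts the noise standard deviation by roughly sqrt(N), which is why the 8 SPS setting is so much quieter than reading at 860 SPS, and why taking a single sample of that averaged output is perfectly fine.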
I suspect that for every one job the government would subsidize for a daycare professional, we’d see three women enter the workforce.
That’s a net of four people employed.
I have no proof of this aside from my own experience watching parents struggle to find care for their kids. Even well off ones where I live. In Massachusetts!
What's even better is this isn't costing New Mexico that much and it removes income restrictions for daycare. I think the total budget is something like $36 million extra with about $20 million of that being capex to build new facilities.