One thing I'm not clear on when watching his videos is whether what he's describing is an established scientific interpretation, or his own thoughts as someone with extensive knowledge of optical engineering (as opposed to theory).
Very enjoyable and thought provoking stuff though!
If you want to avoid this kind of cheating, you need to approach the test as if it were proctored. Communicate the expectations up front: candidates will be required to screenshare and keep mic and camera on at all times, and they should know which resources they're allowed to access and which they aren't.
There are still dozens of ways to cheat even under those conditions, but it should eliminate egregious copy/pasting from an LLM or in-person help.
Were it me, I'd have made it clear up front that we were more interested in the line of thought and explanation than in the code itself. The lack of communication alone would probably be enough to not take the application further.
Another vote for my Casio FX82. Ease and speed of use are why I prefer it to any calc apps.
Though I'll admit that I have a bias towards physical, single purpose devices. I use a physical (digital) timer in the kitchen, and a physical mechanical metronome when playing music.
You're correct that it's an exercise in introspection, rather than relying on the AI's own knowledge. It became clear to me, when trying to write explanations in response to its questions, where my understanding was starting to get fuzzy and hand-wavy.
A nice bonus was getting it to produce a scorecard of correct, nearly correct, and incorrect explanations. I could see these as a good jumping-off point for further learning/research. Though I suspect the AI would be less accurate at this for a more niche topic than the one I chose (refrigeration).
That is an interesting conversation. I like that you had the chat bot follow up with questions forcing you to dig deeper. The summary at the end is a good idea as well.
I've been asking the chatbot to point out my assumptions and to flag weaknesses or errors in my reasoning. That has been pretty useful in uncovering aspects of my thinking that aren't as foundational or coherent/consistent as I expected. It also doesn't require the AI to be an expert in order to give me feedback; it just points out unchallenged assertions and errors in logic.
I think this is a potentially powerful use for AIs. They can patiently and politely nitpick and point out subtle errors in our thinking process, providing unbiased critical feedback. That is almost as valuable as having them be perfect oracles that can answer any possible question.
> It helps that refrigeration has always sort of blown my mind, so this was an interesting topic as well.
Same! It's been front of mind because I've recently been watching Hyperspace Pirate's videos on building a DIY cryocooler (and have been struggling to follow them at a technical level). You might enjoy them too: https://www.youtube.com/watch?v=7QZrHzd3RA8
The linked security blog post[1] has a lot more of the technical details and can clear up some of the questions/confusion that people have added in the comments.
Those guys should not work in identity and access management. Period. One reason says it all: they provide no support. Customers will be left out in the cold, minus all their belongings.
But they could split into several companies to avoid conflicts of interest, if they prefer, this time with proper support and everything. Or they can be forced to do so, if they stay stubborn and continue running their dystopian playbooks.
So the situation is not strictly technical; it goes much deeper than that. A blog post won't make a dent.
The sky crane solves a very particular set of problems with landing a (relatively) small, functional rover on Mars. It's not really a general solution to returning a large payload intact to Earth.