AI can catalogue a forest's inhabitants simply by listening (economist.com)
192 points by helsinkiandrew on Oct 29, 2023 | 104 comments




I wish car engines had mics built in, as well as one close to each suspension.

It would record for the first 5 minutes, then at certain intervals or under certain conditions.

The car would give you FFT images which you could then look at and see how they change over the years. No need for online stuff, just a USB port where you plug in a big USB stick, and software for your computer/tablet to evaluate/visualize it.

If you then saw some problems, you could ask for the audio to be recorded along with it, which you could send to a friend who knows about cars or to your car workshop.
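Something like this is really all the evaluation software would need to get started. A minimal sketch in Python (numpy/scipy; the file names are hypothetical):

    import numpy as np
    from scipy.io import wavfile

    def engine_spectrum(path, n_fft=8192):
        # Averaged FFT magnitude spectrum of one engine recording.
        rate, audio = wavfile.read(path)
        audio = audio.astype(np.float64)
        if audio.ndim > 1:  # mix down multiple mics
            audio = audio.mean(axis=1)
        window = np.hanning(n_fft)
        frames = len(audio) // n_fft
        spec = np.zeros(n_fft // 2 + 1)
        for i in range(frames):
            chunk = audio[i * n_fft:(i + 1) * n_fft]
            spec += np.abs(np.fft.rfft(chunk * window))
        freqs = np.fft.rfftfreq(n_fft, d=1.0 / rate)
        return freqs, spec / max(frames, 1)

    # Compare this year's idle recording against last year's:
    f, old = engine_spectrum("idle_2022.wav")
    _, new = engine_spectrum("idle_2023.wav")
    drift = np.abs(new - old) / (old + 1e-9)
    print("bands that changed most (Hz):", f[np.argsort(drift)[-5:]])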


At Heading On we had exactly this idea, plus using the CAN bus, to detect when cars would need maintenance. We ended up rethinking the idea because most of the people we spoke with weren't interested.


Yea, the bigger issue is that the maintenance itself is expensive and disruptive.


The old adage "you can schedule for maintenance or your equipment will schedule it for you" is true.


There are companies working on that using AI:

Acoustic sensors in the car: https://v2minc.com/#solution

Skoda made a sound analyzer mobile app 3 years ago: https://beebom.com/this-ai-app-detects-internal-issues-in-ca...


Ah damn I need this! My Skoda has squeaky brakes, except they only squeak when you aren't pressing the brake. Clearly something is rubbing but I can't see what, and it's intermittent and also impossible to Google.

I'm skeptical it would actually work well though.


I'm surprised after this many replies no one mentioned that cars do have something pretty close to this across virtually every modern ICE: knock sensors.

Knock sensors are just ruggedized piezoelectric microphones that bolt onto your engine: they detect knock by transmitting the sound a knocking cylinder makes to the ECU as an electrical signal.
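The ECU-side logic is conceptually tiny, too. A hedged sketch (the 5-9 kHz band is my assumption; the real resonance depends on bore diameter and is calibrated per engine):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def knock_energy(piezo, rate, lo=5000.0, hi=9000.0):
        # Band-limited energy of the piezo signal in the assumed knock band.
        sos = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
        band = sosfilt(sos, np.asarray(piezo, dtype=np.float64))
        return float(np.mean(band ** 2))

    def is_knocking(window, rate, background, factor=3.0):
        # ECU-style check: flag knock when energy in the window after
        # ignition exceeds a calibrated multiple of the background level.
        return knock_energy(window, rate) > factor * background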


When we designed our DARPA Grand Challenge vehicle back in 2003 I considered using a guitar pickup for that, to pick up vibration and get a measure of how rough the road was.


I would love something that could use audio to diagnose problems in bikes. Issues can be hard to replicate on a repair stand when the bike is not under load.


The concept you outlined is technically very feasible but economically prohibitive. There are high value situations that incorporate these techniques as standard practice. For example, the diagnosis of rolling mill fault conditions with FFT vibration monitoring has been in practice for at least 50 years. Of course when a cold strip mill goes down, the damage can be measured in megabucks.



Nothing about this is economically prohibitive. This is predictive maintenance 101.


For consumer-level cars, it is not hard to predict or catch something that is actively failing with simple visual inspections and scheduled maintenance.

You can spend millions designing a fancy acoustic system to measure things indirectly, which will need to be installed at a cost of hundreds per vehicle and then be maintained itself. Or you just have the mechanic give it a glance during an oil change (which smart mechanics are already doing free of charge), and pay attention to the dozens of existing sensors that are already reporting the critical stuff. Acoustic sensors won't catch everything, so periodic inspection is still needed.

Worth noting that all modern car engines already have acoustic sensing in the form of a knock sensor, so the car companies are well aware of sound/vibration sensing technologies.

In other words I doubt that a fancier acoustic monitoring system would prevent enough maintenance/damage to justify its own cost.


Most people don’t care about maintenance. They do it only when they must. So I doubt they would pay a penny extra for accurate forecasts of maintenance needs. Insurers and other types of dealers might though!

But wouldn’t constantly changing road types and weather conditions create so much variable noise that trending would be sloppy?


True, but people who buy expensive cars care, which is exactly the target for such a system I imagine.

I am not a mechanic but can at least hazard a guess at what's wrong with a car from the way it sounds, with reasonable confidence. I imagine a purpose-built AI designed by the car's manufacturer, with multiple microphones in the engine compartment and around the car, could gain quite a lot of insight, perhaps even beyond identifying maintenance problems, simply to tune the performance of the engine, especially when combined with other sensors.

Having said that, we seem to be near the end of the line for ICE vehicles, so I wouldn't hold my breath for such a system to be developed.


One of the manifestations of "I don't care about maintenance" is taking a car to the dealer/whoever on a regular schedule without understanding what they need to actually do to the car -- which is not caring about maintenance by having someone else care.

Having a report that's quickly spit out to the mechanic about issues over the past X,000 miles, so they can quote the customer before it becomes a bigger problem, would be nice.


It's a tough market to enter. Someone I know suggested doing this for a company that had a fleet of 1000+ vehicles, and the representative from Bosch told them they wouldn't provide them with diagnostic tools anymore if they did.


> The car would give you FFT images which you could then look at and see how it changes

Good idea, though AI is much better at doing that in 90% of situations. So filter it through AI first.


As data-science darling Andrew Ng said, "90% of machine learning is feature engineering". So you should probably run STFT spectrograms through machine learning models rather than the raw audio signal. Raw audio at 44.1k samples per second has quite low information density per sample compared to simple, common transformations like the STFT anyway.
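A minimal sketch of that feature step with scipy (parameter choices are typical, not canonical):

    import numpy as np
    from scipy.signal import stft

    def log_spectrogram(audio, rate, n_fft=1024, hop=512):
        # Log-magnitude STFT: a (freq_bins, time_frames) array that an
        # image-style model can ingest directly.
        _, _, Z = stft(audio, fs=rate, nperseg=n_fft, noverlap=n_fft - hop)
        return np.log1p(np.abs(Z))

    # One second of 44.1 kHz audio (44,100 raw samples) becomes roughly
    # a 513 x 87 array: far fewer, far more informative numbers.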


I've been thinking the same, but for PCs.

I guess nowadays SMART is pretty good, though, and you can hear if your fans are dying :). Maybe for computers that don't have RPM monitoring and aren't being listened to? It would at least be interesting data to correlate with other metrics.


It'll only make sense if insurance companies can use it to reduce your payout in an accident.


People might be interested in BirdWeather (https://app.birdweather.com/), which collates data from people's bird-listening stations (Raspberry Pis with a mic and a modest NN to classify what is heard). Basically, the technology for this has already arrived as consumer tech.

As someone who is a fairly serious volunteer birdwatcher, we could absolutely do with more of this. There are not enough bird surveyors (paid or volunteer), and AudioMoths hooked up to AI, capable of filling in the gaps, are badly needed to support conservation and re-wilding efforts.
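If you want to try the underlying classifier yourself, there is a Python wrapper around Cornell's BirdNET-Analyzer called birdnetlib. A sketch from memory (file name and coordinates are placeholders; check the project docs for the current API):

    from datetime import datetime

    from birdnetlib import Recording
    from birdnetlib.analyzer import Analyzer

    # Location and date narrow down the candidate species list.
    analyzer = Analyzer()
    recording = Recording(
        analyzer,
        "garden_dawn_chorus.wav",
        lat=35.4,
        lon=-120.7,
        date=datetime(2023, 10, 29),
        min_conf=0.5,
    )
    recording.analyze()
    for d in recording.detections:
        print(d["common_name"], d["confidence"])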


I thought about setting up such a device outside of my window (there's a lot of birds nearby), and the obvious concern with setting up a mic in your house and uploading recordings is privacy.

I found two privacy policies related to Birdweather:

1. https://www.birdweather.com/privacy

2. https://birdnet.cornell.edu/privacy-policy/

The first one mentions the following:

> You may choose to keep your audio recordings private, so that other Bird Weather users cannot access or listen to those recordings. This is configured on the Settings page.

Except Birdweather explicitly mentions they're based on Cornell's BirdNET, whose privacy policy (#2) says:

> BirdNET is an artificial neural network that identifies bird species by sound. Our servers process small audio snippets recorded with the BirdNET app to detect and recognize bird sounds. We store all submitted recordings on our servers. Therefore, we advise users not to submit any audio recordings that they might consider private. The collection of audio data helps us to improve BirdNET and we will use those recordings for research purposes only.

Seems like Birdweather would benefit from clarifying how their privacy policy plays together with BirdNET's.


Seems clear to me:

> Third-Party Research: We share the bird detections and audio as well as labeled data with the Cornell University Laboratory of Ornithology, so that they may use such data for scientific research and to help improve the accuracy of the bird sound detection.

You can set the audio recordings to be inaccessible to other Bird Weather users. I.e., by default audio is shared on the website, but you can turn that off. Independently, they also share the data with Cornell.


> BirdNET is an artificial neural network that identifies bird species by sound.

A while ago there was an HN thread raving about how Seek by iNaturalist was fun (true!) and did such a good job of identifying whatever it was you were looking at (maybe!) and not overstepping the bounds of what it could know for sure. (See below!)

So I set up an account and I tried it out.

It's pretty clear that Seek places far more weight on giving you a full species-level identification than it does on whether it can be confident that that identification is correct. I guess it's possible that the stand of groundcover I found on the shore of an artificial lake in a public park is a different species from the stand of groundcover with identical coloration and shape about 12 inches away... but I doubt it. Even if they were different species, I'm pretty sure my low-resolution images of two stands of plants with no flowers or seeds wouldn't be enough for an expert to tell them apart.

In a parallel occurrence, I took a photo of a local beetle that Seek identified as "Strawberry seed beetle", Harpalus rufipes. I uploaded that to iNaturalist proper (labeled "beetles"; I already didn't trust the identification) and checked out some nearby photos. A very similar-looking beetle had been photographed in my area and identified as Harpalus sinicus. So I left a comment asking how the tagger could tell that this was sinicus and not some other kind of Harpalus. And I got a response, saying "I'm not trained in entomology and I couldn't explain what the difference is. But me and my friend think this is sinicus."

I harbor a sneaking suspicion that the "friend" was Seek.

They advertise that Seek is trained on labels from iNaturalist. But those labels appear to be generated in large part by Seek. Something needs to change.


As a botanist, I have been pretty impressed with Seek's plant IDs. If it is uncertain, it will almost always only ID to genus or family. It doesn't always get IDs right, but it's good.


I've been using BirdNet-Pi for a while and it's great fun. I'm a little confused about the commercial offering from BirdWeather - the PUC looks like a neat product, but I understood most of the BirdNet stuff to be Creative Commons NonCommercial. Maybe there's some nuance about how they're using it, or maybe some pieces have a more lenient open source license?


The PUC folks obtained a separate commercial license from the Cornell Lab of Ornithology.


Thank you for posting this. I'm going to see about setting up one of these for my mom, who is a VERY avid birder (I already have the Pi and outdoor enclosure). It looks like one slightly difficult issue is finding a good weatherproof microphone that works with the Pi.


How does one become a volunteer birdwatcher? Who are you volunteering with?


I volunteer with Birdlife Australia. If you are in the USA I think it’s called the Audubon Society. They will have a number of surveying programs running at any given time.


It's terrible, but this technology is likely coming to a bar/pub/cafe/store/park/trail near you… diarization, voice recognition… I'm concerned that in the near future, humans will no longer be able to assume that their words are not being captured and potentially used for who-knows-what purposes.

The commons, Alexafied.


It's not like recording technology is new. It's only because most countries have laws that tools like these are not widely used by governments. And I don't believe AI is going to change the laws significantly. I don't believe most of the 1984 stuff will happen, at least in the near future.


Easily solved with more tech! Subaudible recording with mic on throat —> encrypted transmission to peers —> peers’ earbuds. Conversation 2.0, or something. Sarcasm, but wouldn’t be surprising if it happens eventually


Based!!


Freedom and privacy are rapidly going to become a thing of the past. You won't be able to escape even if you attempt to. The world is becoming a disgusting place and I want none of it anymore.


Statement detected on web platform 'HackerNews': Autocorrelation in progress... Identity found: Subject #ThrowAway1922A

Warning: Potential dissident behavior. Monitoring level upgraded to TIER-3. Recommendation: Dispatch surveillance drones for real-time observation.


What's more scary is you don't even need the tech of the future to do this.

Just combine the tech of now with some clever optimisations to keep resource usage low enough to do it at scale.

When you think about how they are slowly ripping apart school systems, corrupting the integrity of our institutions both technical and philosophical...

It really makes you think, if it's all in the works.


Time to make the peasants peasants again, they wanna go back to feudal societies.


I think it more likely that it becomes something you pay for. Ads only pay so much, and richer people (upper-middle-class and above) can afford to pay more out of pocket than ads targeted to them are worth. Or, at the very least, it's worth enough for third-party non-ad companies such as LifeLock to run interference on other companies privacy invasion tools.


Time to normalize personal white noise generators.


I think that would just be an arms race that AI would undoubtedly win, at the expense of our sanity.


It's not humans vs AI.

It's intrusive humans empowered by AI vs privacy-protecting humans empowered by AI.


There is some future where AI, deepfakes, and what not become so indistinguishable from real people, that those industries (news media, social-media, etc) that make sport out of hearing "wrong speak" and performing shaming rituals will become obsolete.

Same with the security state, if all the data is suspect, does it matter?


Fairly certain it's already happening in your cell phone.


My phone has a hardware kill switch for the mic/camera.


Last time I was sitting in the garden, I was wondering how far we are with actually understanding bird chirps and tweets. I felt like there were certain patterns I could hear repeatedly in certain situations (danger-approach warning, feeding communication, always the same sound before one bird flew out and collected food, kind of like "make us breakfast" :P). Unfortunately some quick searches on Google Scholar didn't help much; I feel like I'm lacking the right search terms here. Most ML in this area seems to be related to species classification. If there are any bird experts reading: what is the state of the art of "understanding" bird language? Any paper recommendations? I'm curious about basic things like... is there a universal bird language or does each species use its own (can a crow understand a sparrow), are there some sounds that are identified with meaning, etc. Edit: just checked my bookmarks and "Bird-DB: A database for annotated bird song sequences" was the most interesting find, along with some interesting datasets (NIPS, Cornell Birdcall).

I was envisioning an ML-powered device that could translate bird chirps to human-readable text, but my guess is that we're still in the fundamental research phase. One idea I had was recording stuff in my garden and running it through some unsupervised algorithms to identify patterns (a first step is sketched below), then matching them with video and maybe tagging context. It would be really neat to find a pattern and see it correspond to the same situation (cat approaching). Seems like a neat summer project, but I'm not even sure how to tag and label things :)

A first version that identifies the individual birds and their species would already be cool, and I suppose feasible. Bird A (sparrow) speaking... bird B speaking, etc. would already be fun... might give them random names too, instead of A, B, C. That would also help with the next stage of tagging etc.
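For the unsupervised first step, MFCC features plus k-means would do as a baseline. A sketch (librosa + scikit-learn; the clip files are hypothetical, cut from the garden recording):

    import librosa
    import numpy as np
    from sklearn.cluster import KMeans

    def call_features(path):
        # Mean MFCC vector for one short call clip.
        y, sr = librosa.load(path, sr=22050)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    clips = ["call_001.wav", "call_002.wav", "call_003.wav"]
    X = np.stack([call_features(c) for c in clips])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    # Matching cluster IDs against video timestamps ("cat approaching")
    # is then the manual tagging exercise.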


Aza Raskin's Earth Species Project is working on this. He gave a TED talk about it recently, in which he also recommends a couple of books.

https://youtu.be/3tUXbbbMhvk?si=6ESL-zuPIHXFULbC


I recently saw Nathan Pieplow's talk The Language of Birds. It was a pretty good intro to the kind of things we know about bird communication (a fair amount) and what we don't know (a lot).

https://earbirding.com/blog/talks

Maybe you can find a recording somewhere or further references.

The whole concept of birding by ear is pretty cool. We always have the Merlin bird sound AI app at the ready whenever we're hiking.


Thanks for the tip! It does look like there are several recordings on YouTube. I’ll have to give this a listen.

https://youtu.be/IO04p2qM5Y0?si=WY6MzH2c9QGcOHsf


I use the Merlin app, which is pretty accurate at identifying birds by sound. Once it hits on a bird it will give you a list of different calls that bird makes depending on what it might be trying to do (e.g. mating, predator near), so you can match it further, but that part is not automated yet.


I've been living in southern Mexico in the jungle for the last year or so and this app has been a blast. Whenever we hear something new we all whip out our phones and learn something.


I believe ML could certainly perform cataloging (clustering). That would be step one to assigning inherent meaning.


this guy sums it up pretty well:

https://www.youtube.com/watch?v=By_rSYI0Ovs


Here’s a link to the referenced article: https://www.nature.com/articles/s41467-023-41693-w


Thank you kind sir. I feel like HN should be better at linking directly to sources.


Log:

   Human footfall.
   Branch break.
   Human footfall.
   Branch break.
   Weapon cocking, AR-15.
   Weapon firing, AR-15. No hit.
   Weapon firing, AR-15. No hit.
   Weapon firing, AR-15. No hit.
   Weapon firing, AR-15. No hit.
   Weapon firing, AR-15. No hit.
   Weapon firing, AR-15. No hit.
   ...


Makes sense. Is "monitor forest restoration" the new "search and rescue" euphemism for military applications?

"We're going to use this robot to find people inside of urban rubble... uhh.... after earthquakes of course.."


Incompletely. This faces a similar issue to much of the bottom-up air quality work I've been working on.

Listening means that:

(a) coverage is limited to receptor area, so proximity is required (proximity bias)

(b) the inhabitants have sufficient volume to be heard (frequency bias)


Not necessarily true for (b): quiet inhabitants may be ratted out by other inhabitants. Birds will literally call out "snake" (in their bird language) or "puma" to alert others.

Stuff You Should Know covered it on an episode "How We're Learning to Talk to Animals": https://www.iheart.com/podcast/105-stuff-you-should-know-269...


There is an app which [greatly] assists in determining whether government-minted coins are in fact solid, authentic, bullion-struck gold. It listens to the resonance after you tap the coin lightly with a lighter.

Something to do with harmonics [I have a fairly good ear for this, myself, but only with well-known currency]. Similar to a digital guitar tuner.

But this simple app really left me awe-struck. Many of the laser testing machines can be fooled by certain gold-plated frauds. Gold's "ting" is unmistakable. It's nice to have a virtual second opinion when dealing in high-stakes quick transactions.
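No idea what the app actually does internally, but my guess is that the core of it is just finding the dominant resonance peaks in the ring-down. A sketch:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import find_peaks

    rate, ring = wavfile.read("coin_tap.wav")  # hypothetical recording
    ring = ring.astype(np.float64)
    if ring.ndim > 1:
        ring = ring.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(ring * np.hanning(len(ring))))
    freqs = np.fft.rfftfreq(len(ring), d=1.0 / rate)
    peaks, _ = find_peaks(spectrum, height=spectrum.max() * 0.1)
    print("resonant modes (Hz):", freqs[peaks][:5])
    # A genuine coin's mass, diameter and stiffness fix these mode
    # frequencies; plating over a different core metal shifts them.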



I've often thought about how this sort of thing could be used for site security, much like cameras are used. E.g. it could hear a person's footsteps, and hear a person breathing, and it could distinguish between a person and an animal. With multiple microphones, it could determine a location.

I wonder if this is being done already.
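With two or more microphones the location part is classic time-difference-of-arrival: cross-correlate the channels and convert the lag to an angle. A sketch assuming sample-synchronized channels (the mic spacing is a made-up example):

    import numpy as np

    def tdoa_seconds(mic_a, mic_b, rate):
        # Time difference of arrival via the cross-correlation peak.
        corr = np.correlate(mic_a, mic_b, mode="full")
        lag = np.argmax(corr) - (len(mic_b) - 1)
        return lag / rate

    def bearing_deg(delay_s, mic_spacing_m, c=343.0):
        # Angle from the mic pair's broadside axis, from one delay.
        x = np.clip(c * delay_s / mic_spacing_m, -1.0, 1.0)
        return float(np.degrees(np.arcsin(x)))

    # e.g. two mics 0.5 m apart give a bearing; a second, non-parallel
    # pair resolves the ambiguity and yields a position fix.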


There's a system being used now to attempt to triangulate gunshots, but it's not widely praised as being very successful.


From what I understand about ShotSpotter, it is really good at triangulating the location of a sound.

The issues it runs into are all human. It uses humans to determine whether a recorded sound is a gunshot or some other sound. It is deployed selectively in cities, in areas chosen by the police department (very frequently in minority neighborhoods). Conclusions have been used as evidence in court without exposing the underlying data and algorithms used to reach the conclusion.


The dept of conservation in NZ is using a similar thing in remote locations to detect the call of a bird that we haven't seen in a while. It is thought to be extinct, but we thought the takahe was extinct too. From memory it's the one shown at the end of ”Hunt for the Wilderpeople”. Keep it skux.


Awww chur bro


Miniaturize it and make it into a small, waterproof, rugged device that I can take backpacking with me and I'd pay a handsome sum.

(I don't want a cell phone app, my cell phone remains off while hiking)


I made it! The device is small and waterproof, but its battery weighs 10 pounds. Continuous FFT processing and other mathematics are power hungry. It was a vibration logger for industrial applications. An app would definitely be better for you.


No cell service where I'm going. Online apps are useless on most of the earth's surface.


It's headlines like this that really reinforce the idea that the main hindrance with AI is our intellectual ability to utilise it for things.


I'm confused. How is this evidence for that? It seems evidence that people are thinking of interesting ways to use it, not the opposite.


My reading is: “This is a great use of AI. Others who complained AI doesn’t have good applications are lacking imagination.”


You've seen people complaining that AI in general doesn't have applications?

Even more on pattern matching, where people have been using AIs for applications almost exactly like this one for more than a decade?


Sorry yes, this comment is correct


AI will do for human brainpower what steam engines & electricity did for physical labour.

That's (probably) a big win for society, but not for everyone or in every application.

So... Your AI Use May Vary (YAIUMV).


What? Where is that idea mainstream?


I wonder if we can ever do this with babies, even to some limited degree of accuracy. I know there were some efforts on this before.


AI to identify how many babies are crying in a forest? I think it's possible.



I had the exact same idea, after I wondered why my dog seemed unnaturally eager to go out in the yard, even after he'd just been.

It turned out there was a possum in the yard. I know they're quiet, in general, but they do make the occasional sound. I was thinking he must have heard it, which led to... this idea. Naturalists seemed like an obvious client.


In a non verifiable way, sure.


Could it do this for a city?



Worth noting that ShotSpotter isn't AI, and also largely relies on humans to listen to and classify sounds that are picked up. Based on what we've seen with other AI systems applied in these contexts we can only assume it would be even worse at its job in maintaining fairness.


"Hey Alexa, document civilization"


“Hey Alexa, survey North Sentinel Island”


I think the local inhabitants have not read and agreed to the terms and conditions of big tech :-)


"I detected a human, talking to her friend Alexa"


Can you imagine if they took the same money and time and just planted trees? No more global climate change.


I really wish journalists could cut the crap. If this is science reporting, that's grim.

There is plenty of interest in covering speculative, hopeful or early ideas, papers and efforts as what they are: early, occasionally pioneering efforts to achieve something which may lead to success in the future, or enable new efforts to sprout at a tangent.

Interview researchers. Ask them questions. Interview reviewers and peers. They're accessible. Communicate science. As it is. What science is, is worth covering. Do your job.


I'm not clear as to what your actual issue with the article is? It's a very short article, that clearly sets out the potential and alerts the reader to an interesting development. It includes a quote from an author, so they have spoken to him.


It's puff, talking about what would be a breakthrough discovery without taking an interest in any more than it takes to generate buzz/content.

How are we supposed to know anything about the space: where it is, how much promise it has, what that promise actually is? There's no meat in this burger.


Have they really spoken to him, or just summarised the paper (perhaps poorly), or clipped quotes from a university press pack?

The Economist:

    Then it was the computer’s turn. The researchers fed their recordings to artificial-intelligence models that had been trained, using sound samples from elsewhere in Ecuador, to identify 75 bird species from their calls.

    “We found that the AI tools could identify the sounds as well as the experts,” says Dr Müller.
The source paper:

https://www.nature.com/articles/s41467-023-41693-w

    In the next step, we applied an independent artificial intelligence model for bird species identification, developed and trained in the region of our study prior to our sampling.

    Despite the model identifying only ~25% of species detected by the experts in our data, the model-derived first community axis was the single best predictor for expert-derived community composition. 
The thrust of his work is about bird population estimates via vocalisation being used as a predictor for insect biodiversity - number and breadth of insects.

There are several recent papers about this work, one that focuses on the bird <-> insect connection, another that looks at the AI utilisation to identify birds.

It appears that as of now the AI isn't as good at identifying birds as the experts, having been trained to recognise only a quarter of the vocalisations; however, that's enough of a bird sample to get a reasonable estimate on the insects.

I strongly suspect (I lack the smoking gun) that The Economist has just sourced "quotes" provided in the full university presser (the Press Release pack they send out).

See, the lesser presser about the AI work: https://www.uni-wuerzburg.de/en/news-and-events/news/detail/...

and a presser about some of the insect work: https://www.uni-wuerzburg.de/en/news-and-events/news/detail/...

There are multiple studies and multiple papers stemming from this project:

https://www.biozentrum.uni-wuerzburg.de/en/ecological-statio...

and there'll likely be bigger press packs not directly public web hosted for newspaper journalists to pick from.


Thank you!

I guess I deserve the downvotes for coming in hot and opinionated.

In my (admittedly lame) defense, mine literally is an off the cuff comment in a place intended for such.

Your comment demonstrates well what I was (failing) to get at. The project is what it is and where it is. Whether or not that's really interesting is for a science journalist to determine.

It can't be interesting enough to write an article about and not interesting enough to investigate curiously.

It's not that hard to investigate. The people are not hard to reach. The good questions are not that hard to establish.

Why can't we have better? I'm genuinely perplexed.


> Why can't we have better? I'm genuinely perplexed.

If you step back and think of "politics" as "the science of human behaviour" then Manufacturing Consent: The Political Economy of the Mass Media (1988) is still a worthwhile read. *

    It argues that the mass communication media of the U.S. "are effective and powerful ideological institutions that carry out a system-supportive propaganda function, by reliance on market forces, internalized assumptions, and self-censorship, and without overt coercion"
- https://en.wikipedia.org/wiki/Manufacturing_Consent

In short, TLDR, mass media (eg: The Economist) tends to take the path of most return (sales, eyeballs) for least effort.

AKA it's much cheaper to be lazy.

They don't have the resources to have experts in every technical domain that sit about and read as many newly released papers as they can in order to pick out the gold for you.

They (and many other such public media newsrooms) skim the press packs sent out by university media teams, by political party media teams, the feeds curated by Reuters, third party summary feeds of stock market news releases, etc.

On the big stories the larger papers can spare the time of gumshoe investigative reporters to dig into the heart of an event as best they can. Specialist public journals (such as, say, New Scientist) will background and go into more of the lesser stories specific to their domain (it's The New York Times of anglosphere science).

But a great deal of the news that you see is first order lazy reporting, it's ideally assembled by reshaping press releases of self serving sources with a few phones calls, but more often than not it is nothing more than entire blocks of PR material 'plagiarized' as written with nary a phone call to fact check the basics.

---------

* Worthwhile regardless of political party preferences, opinions on Chomsky, whether you thoroughly read the entire book or just skim to get a feel of the thesis and arguments made.


Idk... That's a little abstract for me.

Anyone can write and distribute the good version of this article. Major publication, a nobody-reads-it blog whatever....

If they're practiced at this type of work... It's probably a few days worth of work... Probably spread out over a longer period.

It's possible to do this in a way worth reading and writing. The better version would do well on HN. Where is it? Why do we get this version? Why did the Economist publish this version?

I don't think HN is being manipulated as an "effective and powerful ideological institution that carries out a system-supportive propaganda function, by reliance on market forces, internalized assumptions, and self-censorship, and without overt coercion."

There's a reading of Chomsky whereby this is at least partially emergent. There's a reading of Chomsky (maybe closer to his intent) where it's all a big conspiracy, with market forces filling the space where a belief in wide-scale collusion is untenable. That is, imo, crossing the line into superstition.

The alternative is that Economist science writing is an abstract product, like code, like education, like a lot of things. It can suck, or it can be better, without conspiracy or secular prophecy.


The Economist are op-ed writers, not journalists. They don't do interviews and haven't broken a story in their 150 year history.


A thing that stands out to me about machine learning is its ability to recognize patterns. We all know that the human brain is also well adapted to this task. So adapted, in fact, that it is probably one of the most fundamental, ingrained things we have in the lizard part of our brains.

For some reason, to me this kind of subconscious thing holds some special meaning. It feels like the current LLMs are actually going in the right direction to eventually emulate a conscious mind, and it shows this by imitating what a mind without consciousness would look like.

Personally, if I have to compare the current LLMs to an organic equivalent, I would say we are at the insect level. They can respond to things and carry some surprisingly complex knowledge (spider webs), but ultimately don't "understand" what they are doing and just robotically do what they were programmed to do.

The good thing is that an insect is pretty far from even the dumbest vertebrate in terms of cognitive ability, so AIs aren't going to spontaneously gain awareness anytime soon.


With so faulty an understanding of animal cognition - with not even the knowledge that insects are animals! - I would hesitate in your shoes to attempt any sweeping conclusions about mechanical cognition.


You are right, initially I was thinking of a chicken. Then I forgot what branch a chicken is supposed to be in and wrote animal instead. Changed to vertebrate now. Probably closer.

And yeah, this is just my personal view. I didn't say it is what it should be or how it actually is. Nothing sweeping about it. Simply how I feel based on my experience.


Your experience ill equips you here, then; along with being overly general, your analysis is, I reiterate, very poorly informed.

Spiders aren't insects, as I would have noted earlier had I been closer to awake when I posted my prior comment. They're arachnids. Arachnids and insects are both arthropods, but are no more closely related than that. It's not a rare confusion among laypeople, but it is also among the first things anyone learns in the study of either class. That you do not know it bodes ill for all that follows.

You mention spiderwebs, which is especially remarkable because they are certainly among the best studied examples of unhuman engineering. Relevant research isn't hard to find, and you will if you feel like it; rather than dig up the same links you can, I'll tell an anecdote that bears them out.

A couple of years back, an orb weaver began building her web on my porch every night, for the moths and flies drawn by the light from the front room. Because she opted to build right across the steps down from the porch, one night I faceplanted the web and broke a few strands extricating myself from it.

The next night and every night after, she built in the same place, but not in the same way; instead of the classic round orb-web shape, she built wide and squat across the top of the space, retaining about the same prey-capture area and staying within the volume that was busy with potential prey, but with the lower edge of the web now about six feet above the porch floor.

That's an interesting figure, because it exceeds by a couple of inches both my height and that of my partner at the time. The spider had no opportunity to size her design to us by eye, as neither he nor I was on the porch while she built this way the first night, and orb weavers' eyesight is far too poor for such a task in any case; by the time we came along to sit on the porch that evening, she had already finished her work.

To build as she did, then, she would of necessity have had to understand and remember the spatial relationships involved in my face getting stuck to her web, and then on the next night implement a novel modification to her design such that we could and did pass beneath without conflict - not just with the web itself, but also with its guy lines, none of which crossed the space below the web despite the newel post finials offering a very obvious and easy pair of anchors. And all this while working with useful visual acuity in the span of perhaps one or two centimeters - to a scale which, in our terms, would be that of about a ten-story building.

This is not the behavior of a creature operating solely on unconsidered instinct. I can't speculate on what goes on inside a spider's head, but that there is something going on in there, I know better than to doubt; even had I been inclined otherwise, the actions of one spider over the span of a couple of evenings would have amply sufficed to disabuse me of the error.

Lest it be said I am ungenerous in my critique, I'll note that, overlooking the taxonomic confusion already discussed, your understanding of insect behavior is roughly on par with that of the naturalist Jean-Henri Fabre, whose writings on spider wasps were much celebrated in their day - which was roughly contemporaneous with the American Civil War. Much has been learned since, most of which Fabre has probably been unable to appreciate owing to his death in 1915. You and I are more fortunate. So if nothing else, it may be worth your while to take advantage of your luck, and pursue your study at least to within shouting distance of the present day before attempting to generalize.


Consciousness has nothing to do with cognition. Also, if you look at any complex natural system long enough, you will see an expression of learning. In that way humans and AI both are not particularly unique except in the cases of their input formats and the capacity of their internal representations. There is no step change from non-learning to learning system, it's all a matter of their "computing" power so to speak.


I believe people who know about the current state of LLMs and/or human/animal brains have very different opinions than yours

Common sense and metaphors don't really hold up in this context



