Berniek's comments

I suspect iPhones actually cater to the way people have recently been educated. As I grew up, my schooling taught me as a first principle that to learn something you had to understand it. Now my children and grandchildren are quite happy to learn about things and never seek to understand them. As consumers, those same people just don't want or need to know about security, or crashes, or how everything connects; they just want it to work seamlessly. iPhones do exactly that. Consumers don't care that iOS is Unix-based; they just want to send messages, photos, texts, etc., and you get a choice of one app for most tasks. With Android you get multiple choices for anything: some work, some don't. Political propaganda says this program or that is monitored by (insert choice of "nasty" government), so consumers can't tell what is correct, and they avoid making the choice by going iPhone. By the way, it may well be that we do not have to understand things to use them. No? Ask people how the stock market works and you will never get two answers the same.


Well, 6.3V is an interesting value. The European standard AC voltage is 220V, and many countries use 240V (or 230V). Say we want 5V to power our whatever. Regulators need some overhead, usually 0.6~1.2V, so we need at least about 6.2V. BUT THAT IS DC. When you rectify AC (turn it into DC) you actually get PULSES of DC with a peak of 1.414 times the AC RMS value. To make it smooth DC you need a capacitor, but as you draw current from the capacitor it can't stay fully charged (it has a charging time constant set by the transformer's impedance and the capacitance), so you get ripple. The more current you draw, the more ripple. So you add a regulator and keep the DC output below the ripple troughs as much as you can.

Now to do the maths. Using the SAME transformer on each supply, we get a peak output of 8.9V on 220V and 9.7V on 240V. We need 5V, which gives an overhead of 8.9 - 5 = 3.9V and 9.7 - 5 = 4.7V. That would be plenty if we had pure DC, but we have pulsed DC smoothed by a capacitor, so as we draw current the capacitor sags and the regulator's input is DC with ripple on it; the overhead the regulator needs can be eroded at the troughs of that ripple. So you either draw less current or give yourself more initial overhead. The more overhead you start with, the more heat the regulator has to deal with, so you want it operating with just the right amount of overhead. By the way, the ripple component is like AC, and its heating effect is actually reduced (the maths says it's about 0.64 times the same value of DC).

This is true for non-switching regulators. Switching regulators bring a new set of problems with output ripple (which is usually very high frequency and easier to filter out, even with small capacitance); that can cause radio interference within the circuits, but good design should eliminate it. The ideal overhead seems to be ~4V. The regulator's power dissipation (it gets hot with bigger overhead) is a trade-off: you can operate with less overhead, but then you need a bigger transformer (to supply more charging current to the capacitor) or bigger capacitance, and that means more cost. By the way, the US's 110V or 120V AC: those two values are directly related to 220V and 240V.
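For anyone who wants to plug the numbers in, here is a rough sketch of that arithmetic in Python. The load current, mains frequency, and capacitor value are my own illustrative assumptions, not figures from the design above:

    import math

    V_SECONDARY = 6.3   # transformer secondary (V RMS) with 220 V mains
    V_OUT = 5.0         # regulated output we want

    # Assumed load and smoothing capacitor, purely for illustration.
    I_LOAD = 0.5        # A
    F_MAINS = 50.0      # Hz
    C_SMOOTH = 4700e-6  # F

    for v_mains in (220.0, 240.0):
        # The same transformer's secondary scales with the primary voltage.
        v_rms = V_SECONDARY * v_mains / 220.0
        v_peak = v_rms * math.sqrt(2)     # ~8.9 V at 220 V, ~9.7 V at 240 V
        headroom = v_peak - V_OUT         # ~3.9 V and ~4.7 V before ripple

        # Rule-of-thumb full-wave rectifier ripple: dV ~ I / (2 * f * C).
        v_ripple = I_LOAD / (2 * F_MAINS * C_SMOOTH)
        trough = headroom - v_ripple      # headroom left at the ripple troughs

        print(f"{v_mains:.0f} V mains: peak {v_peak:.2f} V, "
              f"headroom {headroom:.2f} V, ripple ~{v_ripple:.2f} V, "
              f"trough headroom ~{trough:.2f} V")

    # The 0.64 ripple-heating factor applied to 6.3 V gives the ~4 V
    # "DC equivalent" mentioned below.
    print(f"6.3 * 0.64 = {6.3 * 0.64:.2f}")

With these assumed values the trough headroom stays above a typical 0.6~1.2V dropout, which is the whole game: enough overhead at the troughs, but not so much that the regulator cooks.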

But why 6.3V? Well, like a lot of traditional designs: the heater voltage of vacuum tubes was nominally 6.3V, so the transformer design was already done. Also, running vacuum tube heaters from a 6V battery is likely to shorten their life (the DC equivalent of 6.3V used for heating is about 4V), but the early batteries had a fairly high internal resistance, so the voltage actually supplied to the heaters was usually much less.

A quick note is in order about values. Why pick particular values? History and experience tell us that certain values are efficient. Metric is very good for measuring distances, but not so good for bolts, where the imperial system reigns supreme (a 1/2 inch bolt has more useful applications than a 10mm bolt when a 15mm bolt is too long). Same goes for fathoms: it is a much better measurement of depth, because nearly all bodies of water have ripples or waves, and a 6-foot wave means an error of 1 in fathoms but an error of 6 in feet.

Turns out 6.3V seems to be efficient for both vacuum tubes and regulators!


The references in the comments suggest ChatGPT is producing this effect. But that is (or should be) unlikely; the "training" or moderation (tweaking?) should actually solve this problem, since it should be relatively easy for a model to separate its own output from its sources. BUT where it will happen is when multiple instances of these language models compete with each other. ChatGPT quoting Bing or Bard output probably can't be reliably countered with internal training of ChatGPT, and the same goes for Bing and Bard and all the other myriad manifestations of these data mining techniques. (Unless they merge them together?)
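To make that concrete, here is a hypothetical sketch (my own invention, not anything the providers have described) of why self-filtering is the easy case: a provider can keep a ledger of fingerprints of everything it generates and drop matches from future training data, but it holds no such ledger for a rival model's output.

    import hashlib

    # Hypothetical: remember a fingerprint of every text this model emits,
    # so its own output can be excluded from future training data.
    own_output_ledger = set()

    def fingerprint(text):
        return hashlib.sha256(text.strip().lower().encode()).hexdigest()

    def record_generation(text):
        own_output_ledger.add(fingerprint(text))

    def usable_for_training(scraped_text):
        # Catches only text *we* produced; a rival model's output sails through.
        return fingerprint(scraped_text) not in own_output_ledger

    record_generation("An answer this model generated earlier.")
    print(usable_for_training("An answer this model generated earlier."))  # False
    print(usable_for_training("An answer some rival model generated."))    # True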


> BUT where it will happen is when multiple instances of these language models compete with each other.

That's what everyone else is saying already. Not sure what exactly you are arguing against.


Sorry, a bit late replying; I've been away. It is not the competition itself that is bad, it's that anything produced by these models can't be filtered out and so becomes a "circular" source. There will be no way to test for "truth". At least on an individual bot the training data can be tweaked to exclude its own output as a source; the competition will make the validity or "truth" of most data questionable. I guess it should be possible for an individual LLM to be trained for "truth" (reality?), but it becomes almost impossible for an LLM to discern truth when the sources it is analyzing were generated by another LLM.


Hang on a bit. If this website should have a "look and feel" like other websites, then what is the point of the W3C? Its stated aim, as I read it, is to make the web more consistent. But we need diversity; that is where progress comes from.


Google/Firefox/Adobe Flash/React/Angular/etc. innovate. The W3C just attempts to keep them consistent. They're a standards committee.

"W3C develops these technical specifications and guidelines through a process designed to maximize consensus about the content of a technical report, to ensure high technical and editorial quality, and to earn endorsement by W3C and the broader community."

I'm not addressing whether I like the W3C or whether I think we'd be better off if the W3C were more diverse/progressive. I'm just saying that if you judge them by their own stated goals, their website's design fits the mission.


Well, I am somewhat skeptical (cynical?) of the W3C itself, and this website does nothing to change my opinion. I find the layout almost childish and "waffle-centric", the type of website you have when how it looks is more important than the message (or when you want to gloss over any deep consideration of the message?).

I guess W3C is so broad in concept that it is very hard to be specific without turning users off at first visit.


And a related question: do any or all of these programs use copyrighted material as their training data, and is that a breach of copyright?


Sovereign airspace, it seems to me, should be a bit like the 12-mile limit or the 200-mile economic zones for oceans. So how far up does it go? 12 miles? If it is a segment shaped like the continent plus the 12-mile ocean limit, does it reach out into space? Once the air becomes too thin (and cold) to support life, is that the limit? Apparently you can fly balloons in the stratosphere; is that sovereign airspace? As for the signals a balloon can pick up: if they are important, surely they will be encrypted. I seem to remember in the sixties or seventies there was a satellite photo of a Russian reading a newspaper in Red Square, and you could read the newsprint. Even if satellite technology hasn't improved in the 50 years since, I would suggest that balloon spying hasn't either. How many megapixels is your phone?


It's the ability to pretty much hover for a while that makes it different. Considering how fast recon satellites fly past overhead (literally a single blink to cross your entire field of view, judging by those green-laser videos that have been showing up lately), a balloon can capture a lot more targeted info, and quite cheaply for that matter.


I worked for a while in a fundamental neuroscience research environment. Basically, there was one supplier of VERY expensive equipment in the field, and research groups that shelled out lots of money were then restricted to only the research subjects the machine could handle. This actually changed the focus from fundamental exploratory research to directed research. I think ChatGPT is the same: it will limit what most people believe AI is and what it is used for. (On a basic level, isn't it just data mining on a grand scale?) The fundamental problem of "truth" is not considered important in the hype. If it can't deliver anything that you know absolutely to be "truth" without having to verify it, then it is just a shiny new toy. I think the headlines and hype are generated to gloss over the shortcomings of this field in general. (Is it a sign of the times? What's that other media-generated hype that makes money... blockchain?)


This is actually an interesting reply, and something I did not consider.

To me, the most impressive part of ChatGPT was not that it could give mostly correct answers to known problems. In a sense, internet search could do it already (just in a much more cumbersome way), with similar degrees of correctness.

The most impressive part for me was actually how seamlessly it parses and produces fluent natural language. Text generated by it reads like something a human would type.

I haven't yet tried to fool it by purposefully asking something ambiguous (ambiguity being a characteristic of natural languages), or asking about something that has an ambiguous answer to see how it handles it, but so far I'm impressed.

But I never considered that people may restrict AI research to language models due to the rampant success of this avenue of research. I hope this is not the outcome, but I wouldn't be surprised (i.e. the success of ChatGPT acts as a black hole for investment in the area, with everyone racing to cash in on it).


Accepting the fact that some people are just arseholes, and that it is no reflection on me. I used to worry that it was something I had or had not done that made them interact the way they did. Now I initially give them the benefit of the doubt (bad day, tired, hangover), but if it continues I no longer interact with them and move on. I don't even think about them any more.


But is it really Artificial Intelligence, or is it just the production of logic algorithms based on data? I guess it comes down to what the definition of intelligence is. Arriving at totally new concepts from directions not pointed to by the data or by conventional logic seems to me closer to the definition of intelligence. Having said that, AI does produce some spectacular results that would be impossible any other way; I think of it as data mining on steroids. But back to the original question. What I fear most is that a significant number of people will embrace it as a doctrine to the exclusion of all else. Believing what AI produces as fact is very, very dangerous, but that seems to be where this technology is leading us.

I remember years ago reading research on what were originally called "BEAM robots". Basically, the premise was that if an organism (in this case a mechanical robot) was constrained to a finite number of random movements, then given enough time it could be said the robot has memory. It is a strange concept. But look for BEAM robots now and you find lots of information that has diverged completely from the original concept. It was one of those concepts that was really hard to understand, so the "new" proponents changed it to something entirely different, but much easier to work with. AI seems to be doing the same, and if that becomes mainstream, I think we will lose the advantage of intelligence. That is truly dreadful. (Actually, if you have to teach your AI anything more than fundamentals, then it is just learning by data mining; it's not intelligent.)

