I'm not sure I can answer that without confusing things more, but I'll see what I can do. In a MOS transistor, the gate is insulated, so there is no gate current, just a voltage. This voltage controls the transistor.
In a bipolar (NPN or PNP) transistor, the current through the base causes a larger current through the collector, amplified by the beta factor. So the transistor is amplifying current. But that current depends on the voltage between the base and emitter, so from that perspective the voltage controls the transistor too.
Whether you're amplifying current or voltage depends on the circuit, so I can't give more than a handwaving answer.
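Still, to put rough numbers on the two views of the same device, here's a quick Python sketch (beta, Is and the currents are purely illustrative values, not data for any real part):

    import math

    beta = 100     # current gain (illustrative assumption)
    I_s  = 1e-14   # saturation current in amps (illustrative assumption)
    V_t  = 0.025   # thermal voltage at room temperature, volts

    # Current view: base current sets collector current via beta.
    I_b = 10e-6                # 10 uA into the base
    I_c = beta * I_b           # ~1 mA out of the collector
    print(f"Ic from beta: {I_c * 1e3:.2f} mA")

    # Voltage view: the same collector current follows Vbe exponentially
    # (Shockley / Ebers-Moll approximation).
    V_be = V_t * math.log(I_c / I_s)
    print(f"Vbe needed:   {V_be * 1e3:.0f} mV")   # lands around 630 mV

Both prints describe the same operating point, which is why the current-controlled and voltage-controlled views are both defensible.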
In a field effect transistor (in which the actual physics is, at least for me, simpler to grasp) the gate is isolated from everything else and the voltage at the gate directly changes the geometry of the conductive channel between the S and D pins. In effect the gate voltage directly influences the resistance of the component. MOS is a name for a particular practical realization of this mental model.
In a bipolar junction transistor (i.e. PNP/NPN) there are two diodes that are positioned just so that conduction of one of them influences the other in such a way that when one is positively biased the other will conduct even when reverse biased. For the typical BJT these two diodes have significantly different construction and thus there is a difference between emitter and collector, but the effect works both ways (and in fact many circuits will somewhat work even with the 2N3904 connected the wrong way around). The effect is also caused by a change in the properties of the doped semiconductor material in response to an electric field gradient, but (at least for me) there is no directly applicable model involving discrete lumped components changing their parameters in response to external stimuli that matches the underlying physical principle.
> I understand that transistors amplify current but how do they amplify voltage
Any voltage across a resistor will produce a current, and any current through a resistor will produce a voltage.
By properly connecting resistors at the transistor base and collector, one can turn a driving voltage into a current and the collector current back into an output voltage. The basic common emitter transistor amplifier circuit is a good example as it shows how a current amplifier like a transistor is used to amplify a voltage. Resistors are the secret that allow all permutations: voltage to voltage, current to voltage, voltage to current, current to current.
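A toy calculation of that common emitter idea (component values are made up for illustration; a real stage needs proper biasing and usually an emitter resistor):

    beta = 100      # transistor current gain (assumed)
    V_be = 0.65     # approximate base-emitter drop, volts
    V_cc = 12.0     # supply voltage
    R_b  = 100e3    # base resistor, ohms
    R_c  = 4.7e3    # collector resistor, ohms

    def output_voltage(v_in: float) -> float:
        """Collector voltage for a given input voltage (ignoring saturation)."""
        i_b = max(v_in - V_be, 0.0) / R_b   # resistor turns voltage into base current
        i_c = beta * i_b                    # transistor amplifies the current
        return max(V_cc - i_c * R_c, 0.0)   # resistor turns collector current into voltage

    print(output_voltage(1.00))   # ~10.36 V
    print(output_voltage(1.10))   # ~9.89 V: 0.1 V in gives roughly 0.47 V of swing out

The rough voltage gain here is beta * Rc / Rb, which is exactly the "resistors are the secret" point.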
Transistors change their effective resistance based on the "control stimulus", which in turn changes how much current can pass through them.
In BJT transistors the "control stimulus" is the current flowing through the base pin, while for (MOS)FET transistors it's the voltage between the gate and source pins.
The amplification happens because in the right region the change in effective resistance is high for small variations in the "control stimulus".
If you drive a BJT with a resistor in front of the base pin, you can drive it with a voltage. If you put a resistor between the gate and source pins, you can drive a MOSFET with a current source.
So the way you control them is different, but what they end up doing is the same. And by using a resistor you can effectively change the way you drive them.
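For the MOSFET-driven-by-a-current case, here's a rough square-law sketch (k, Vth and Rgs are assumptions for illustration only):

    k    = 0.5     # transconductance parameter, A/V^2 (assumed)
    V_th = 2.0     # threshold voltage, volts (assumed)
    R_gs = 10e3    # resistor between gate and source, ohms

    def drain_current(i_drive: float) -> float:
        """Drain current for a given drive current pushed into the gate node."""
        # The drive current flows through Rgs and becomes the gate-source voltage.
        v_gs = i_drive * R_gs
        if v_gs <= V_th:
            return 0.0                   # below threshold: channel is off
        return k * (v_gs - V_th) ** 2    # saturation-region square law

    print(drain_current(150e-6))   # Vgs = 1.5 V, below threshold -> 0 A
    print(drain_current(300e-6))   # Vgs = 3.0 V -> 0.5 A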
Your DC battery or power supply provides your headroom, and the transistor's base or gate senses a small increment of that voltage; the transistor can sometimes deliver as much current as it takes to push the voltage across a resistance or impedance right up to the rails.
Yeah dude, I think so too. Some of the ideas I liked though were about prescription, and then him saying it's all about destination. And trying to look beyond words and towards an understanding.
And I agree, most of it can be summarised as "life is a complex system. You can't marry any kind of ideology if you want to achieve any goal, but instead keep listening to feedback and tune accordingly"
I haven't come across any, but I'm not surprised since it makes them ripe for abuse. There are providers that accept payment in bitcoin however, mostly seedbox providers. RapidSeedbox comes to mind.
Another tip: Prior to going to a shopping website (such as Amazon), I go to my Firefox browser settings, and I disable all images. This keeps me from buying useless junk via addicting recommendation algorithms, and removes the temptation very well. I also never go to one of these websites, unless I know exactly what I am planning on buying. In other words, I make a paper list of things I need to buy before I get onto one of these sites.
That plugin is for desktop browsers. My biggest distraction is the phone actually. And Firefox for phone doesn't support it yet. Somebody should really port it. Greasemonkey also doesn't work on mobile browsers.
What a great idea. I myself have disabled colors on the screen to avoid those addictive contrasts.
I also try to do a phone fast every Sunday, where I give my phone to my wife and just live without it for a day. It's amazingly rewarding. You should give it a try :)
> I also try to do a phone fast every Sunday, where I give my phone to my wife and just live without it for a day. It's amazingly rewarding. You should give it a try :)
Amazing :-)! You know this is exactly the correct way, to reset your brain, to be motivated, so that you can work hard! I will definitely give it a try! Thank you :-)
> That plugin is for desktop browser. My biggest distraction is the phone actually. And firefox for phone doesn't support it yet. Somebody should really port it. Grease monkey also doesn't work on mobile browsers.
I do not know if this is helpful for your situation, but I keep my laptop [Windows 10 Professional] always on. I always keep my Synology NAS on. I usually remotely access these devices on my iPhone or my iPad. I prefer the iPad though. I essentially VNC in, and I then get on the internet via Firefox Desktop with extensions loaded.
Sometimes the remote connection is not an ideal situation. I keep a Rock Pi X, with a Windows 10 variant loaded on it, with me, when I am out and about. I have it configured with the Desktop browser extensions that I like to use and I basically VNC in: https://liliputing.com/2020/10/the-59-rock-pi-x-is-like-a-wi...
A smaller form factor like a Raspberry Pi Zero with an Ubuntu variant may be a better option, though.
Let me chime in on the voting issue (or I think what you mean is the mob voting issue?)
Imagine a discussion on Linux and Torvalds comments on it. But due to randomisation, it doesn't get enough traction. Note that this problem will become bigger the more top level comments you get.
This is the same problem with democracies also. Everyone gets to vote but the outcome might not be the best. But the alternative of randomly selecting people to govern also has its problems.
Do you know any platforms which have successfully done the randomisation thing?
> Imagine a discussion on Linux and Torvalds comments on it.
This seems like a rarity. The vast majority of HN discussions don't have this situation, and it seems odd if the only purpose of HN voting is to upvote "celebrity" comments. There are much better places to follow the comments of celebrities than on HN.
> the alternative of randomly selecting people to govern also has its problems.
I strongly believe this is actually the least bad form of government, and vastly superior to elections, which are glorified high school prom royalty pageants.
> Do you know any platforms which have successfully done the randomisation thing?
No, though I have no idea what "algorithm" Twitter uses to determine the order of replies in a thread. (Probably not totally random.)
I didn't mean celebrity but more like knowledgeable people. A random system, just like a random election, does not guarantee best or betterness in any form. It's not important however, because I agree it's unlikely and I do see where you're coming from (I also share your sense of cynicism about democracy).
I don't use Twitter that much but I hear that Twitter is really toxic. Assuming some part of Twitter's reply section is random, does that inspire confidence that such a system might work?
> I didn't mean celebrity but more like knowledgeable people.
That's my point though. A celebrity like Torvalds will likely get upvoted, but in my experience, non-celebrity knowledgeable commenters often get downvoted by people who are much less knowledgeable.
> A random system, just like a random election, does not guarantee best or betterness in any form.
I don't think any system guarantees betterness. :-) But random seems to be at least pretty fair and least subject to abuse.
> Assuming some part of Twitter's reply section is random, does that inspire confidence that such a system might work?
There are different parts of Twitter. The Twitter timeline is definitely not random. It's either reverse chronological or "algorithmic", depending on your settings. But any given tweet can have any number of replies, and I don't know how Twitter determines the order of display of replies to a tweet. But it's overall a very different format from HN, so comparisons are difficult.
I see. My only contention would be that abuse is stopped but use is also equally degraded (due to randomness).
But I see what you're saying. Some combination of voting and randomness might be worth it. Also, another thing is maybe some sort of sentiment analysis can help (abuse mainly comes from trolling, virtue signalling etc).
I don't know, if there were a way to figure out what value a comment adds (or the inverse), then that, combined with voting and some sort of randomness, might make the system fairer and better?
One could use Thompson sampling. Every comment starts with 1 upvote and 1 downvote. The up/down vote counts determine a Beta distribution. When displaying comments, draw from the Beta distribution for each comment, and present them in that order. High quality comments drift reliably to the top over time, but other comments have their own chances at the top to accumulate votes and better determine their place in the stack.
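A minimal sketch of that ordering in Python (comment names and vote counts are made up):

    import random

    # Each comment keeps (upvotes, downvotes), seeded with one of each.
    comments = {
        "comment_a": (1 + 30, 1 + 2),   # well received so far
        "comment_b": (1 + 3,  1 + 4),   # mixed reception
        "comment_c": (1 + 0,  1 + 0),   # brand new: pure prior
    }

    def display_order(votes):
        """Order comments by a draw from each one's Beta(up, down) distribution."""
        draws = {cid: random.betavariate(up, down) for cid, (up, down) in votes.items()}
        return sorted(draws, key=draws.get, reverse=True)

    print(display_order(comments))

Most of the time the well-voted comment comes out on top, but the brand-new one still lands in the first slot often enough to collect the votes that will determine its real place.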
Monoliths are also distributed systems and will run on multiple hosts, most probably co-ordinating on some sort of state (and that state management will need to take care of concurrency, consistency). Some hosts will go down. Service traffic will increase.
I understand your point. You are using "distributed" in the sense of "how one big piece of work is distributed"; you probably also hate overly "Object Oriented" code for similar reasons.
But distributed systems is a well understood thing in the industry. If I call you and you tell me this, then you're directly responsible for hurting how successful I would be by giving me a misleading sense of what a distributed system is.
> But distributed systems is a well understood thing in the industry.
Wait, what?
Distributed systems are one of the most active areas in CS currently. That's the opposite of "well understood".
It's true that most systems people create are required to be distributed. But they are coordinated by a single database layer that satisfies approximately all the requirements. What remains is an atomic facade that developers can write as if their clients were the only one. There is a huge difference between that and a microservices architecture.
Distributed systems are well understood though. We have a lot of really useful theoretical primitives, and a bunch of examples of why implementing them is hard. It doesn't make the problems easier, but it's an area that, as you say, has a ton of active research. Most engineers writing web applications aren't breaking new ground in distributed systems - they're using their judgement to choose among tradeoffs.
Well understood areas do not have a lot of active research. Research aims exactly to understand things better, and people always try to focus it on areas where there are many things to understand better.
Failure modes in distributed systems are understood reasonably well, but solving those failures is not, and the theoretical primitives are far from universal at this point. (And yes, hard too, where "hard" means more "generalize badly" than hard to implement, as the latter can be solved by reusing libraries.)
The problem is that once you distribute your data into microservices, the gap between well-researched, solved ground and unexplored ground that even researchers don't dare enter is extremely thin, and many developers don't know how to tell the difference.
Correct. That doesn't make monolithic systems "not distributed".
Secondly, I don't know why you say "distributed systems are an active area of research" and use this as some sort of retort.
If I ask "Is a monolithic app running on two separate hosts a distributed system or not?", and your answer is "We don't know, it's an active area of research" or "It's not. Only microservices are distributed", then you're giving me a misleading answer.
Most of what people call monolithic systems are indeed distributed. There are usually explicit requirements for them to be distributed, so it's not up to the developer.
But ACID databases provide an island of well understood behavior in the hostile territory of distributed systems, and most of those programs can do with just an ACID database and no further communication. (Now, whether your database is really ACID is another can of worms.)
Different kinds of distributed systems have wildly different complexity in possible fun that the distributed nature can cause. If you have a replicated set of monoliths, you typically have fewer exciting modes of behaviour and failures.
Consider how many unique communication graph edges and multi-hop causal chains of effects you have in a typical microservice system vs having replicated copies of the monolith running, not to mention the several reimplementations or slightly varying versions and behaviours of the same.
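Back of the envelope, with made-up numbers:

    from math import comb

    # 12 microservices that may all talk to each other vs. 12 monolith
    # replicas that each only talk to the shared database.
    services = 12
    print(comb(services, 2))   # 66 possible unique service-to-service edges
    print(services)            # 12 edges: one per replica, all to the database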
I don't even consider a replicated set of monoliths a distributed system.
If you've done your work correctly, you get almost no distributed-system problems. For example, you might be pinning your users to a particular app server, or maybe you use Kafka and it is the Kafka broker that decides which backend node gets which topic partition to process.
The only thing you need then is to properly talk to your database (an app server talking to a database is still a distributed system!), use database transactions or maybe use optimistic locking.
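The optimistic locking part is roughly this (toy sqlite example; the table and column names are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
    conn.execute("INSERT INTO accounts VALUES (1, 100, 0)")

    def withdraw(amount):
        balance, version = conn.execute(
            "SELECT balance, version FROM accounts WHERE id = 1"
        ).fetchone()
        # The version check in the WHERE clause is the lock: a concurrent writer
        # that committed first bumps the version and our update matches nothing.
        cur = conn.execute(
            "UPDATE accounts SET balance = ?, version = ? WHERE id = 1 AND version = ?",
            (balance - amount, version + 1, version),
        )
        conn.commit()
        return cur.rowcount == 1   # False means somebody beat us: re-read and retry

    print(withdraw(30))   # True on the first, uncontended attempt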
The fun starts when you have your transaction spread over multiple services and sometimes more than one hop from the root of the transaction.
> Monoliths are also distributed systems and will run on multiple hosts
... not necessarily. Although the big SPOF monolith has gone out of fashion, do not underestimate the throughput possible from one single very fast server.
Well, no matter how fast a single server is, it can't keep getting faster.
You might shoot yourself in the foot by optimizing only for single servers, because eventually you'll need horizontal scaling and it's better to think about it at the beginning of your architecture.
This is far from inevitable. There are tons of systems which never grow that much - not everyone works at a growth-oriented startup - or do so in ways which aren’t obvious when initially designing it. Given how easily you can get massive servers these days you can also buy yourself a lot of margin for one developer’s salary part time.
Even in a contrived situation where you have a strict cache locality constraint for performance reasons or something, you'd still want to have at least a second host for failover. Now you have a distributed system and a service discovery problem!
I mean, you can always deploy your microservices on the same host, it would just be a service mesh.
Adding a network is not a limitation. And frankly, I don't understand why you say things like "understanding the network". Reliability is taken care of, routing is taken care of. The remaining problems of unboundedness and causal ordering are taken care of (by various frameworks and protocols).
For DLQ management, you can simply use a persistent dead letter queue. I mean it's a good thing to have a DLQ because failures will always happen. As for which order to process the queue in, etc., these are trivial questions.
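Something like this toy retry-then-DLQ loop is all I mean (queue names and the retry limit are assumptions; a real broker handles the persistence for you):

    MAX_ATTEMPTS = 3
    main_queue = [{"id": 1, "payload": "ok"}, {"id": 2, "payload": "boom"}]
    dead_letters = []   # would be a persistent queue in a real system

    def handle(message):
        if message["payload"] == "boom":
            raise ValueError("processing failed")

    while main_queue:
        message = main_queue.pop(0)
        attempts = message.get("attempts", 0)
        try:
            handle(message)
        except ValueError:
            if attempts + 1 >= MAX_ATTEMPTS:
                dead_letters.append(message)    # park it for later inspection
            else:
                message["attempts"] = attempts + 1
                main_queue.append(message)      # requeue and try again

    print(dead_letters)   # message 2 ends up here after three failed attempts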
You say things as if you have been doing software development for ages, but you're missing out on some very simple things.
Sounds like you're saying "Don't do distributed work" if possible (considering tradeoffs of course; I guess your contention is that people just don't even consider this option).
And secondly, if you do end up with a distributed system, remember how many independently failing components there are, because that directly translates to complexity.
On both these counts I agree. Microservices is no silver bullet. Network partitions and failure happen almost every day where I work. But most people are not dealing with that level of problems, partly because of cloud providers.
The same kind of problems will be found on a single machine also. You'd need some sort of write-ahead log, checkpointing, and maybe to tune your kernel for faster boot-up, heap size and GC rate.
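For example, a toy write-ahead log looks something like this (the file name and record format are made up):

    import json
    import os

    LOG_PATH = "wal.log"
    state = {}

    def apply_change(key, value):
        # Append the intended change and fsync it *before* touching the
        # in-memory state, so a crash can be replayed from the log.
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        state[key] = value

    def recover():
        recovered = {}
        if os.path.exists(LOG_PATH):
            with open(LOG_PATH) as log:
                for line in log:
                    record = json.loads(line)
                    recovered[record["key"]] = record["value"]
        return recovered

    apply_change("user:1", "alice")
    print(recover())   # {'user:1': 'alice'} even if the process crashed after the write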
All of these problems do happen, but most people don't need to think about it.
I'm not reading this as "Don't do distributed work". It's "distributed systems have nontrivial hidden costs". Sure, monoliths are often synonymous with single points of failure. In theory, distributed systems are built to mitigate this. But unfortunately, in reality, distributed systems often introduce many additional single points of failure, because building resilient systems takes extra effort, effort that oftentimes is a secondary priority to "just ship it".
Indeed. So with a monolith we usually already have 3-4 (or more) somewhat reliable systems, and one non-reliable system, which is your monolithic app. Why add other non-reliable systems if you don't really need them?
Making a system reliable is really, really hard and takes a lot of resources, which companies seldom pursue.
I realized this one day when I was drawing some nice sequence diagrams and presenting them to a senior and he said "But who's ensuring the sequence?". You'll never ask this question in a single-threaded system.
Having said that, these things are unavoidable. The expectations from a system are too great to not have distributed systems in the picture.
Monoliths are so hard to deploy. It's even more problematic when you have code optimized for both sync CPU-intensive stuff and async I/O in the same service. Figuring out the optimal fleet size is also harder.
I'd love to hear some ways to address this issue and also not to have microservice bloat.
I know we should judge not the person but only the ideas, but it seems like all of Pinker's ideas are too shallow for an academic. For example,
Also, Pinker wrote a book on how the mind works, which is very, very wrong (my wife, who's a psychologist, absolutely hates it).
She asked me to read a small article called "That's not how the mind works" by some guy at MIT to explain.
Then I've also read his book on "How to write" and as an avid reader, I'd say we'd not have had Shakespeare if we followed his advice.
And now you're telling me that his other books are also very shallow? Does he not realize it?
Just like all of the edge.org/Epstein people (if you’re unaware, Pinker is accused of raping a 15 year old girl and spent extensive time with Epstein), he’s all smoke and no substance. In fact the “third culture” group has proven to be amoral and completely corrupt.
Perhaps you read an excerpt… it's a 128-page book.[1] Also, Fodor was a Rutgers guy, out of Columbia and Princeton, though this was published by MIT Press.
Is there a setting in Firefox that allows me to invert colors without a plugin or JavaScript? The default black on white is too painful for the eyes.