I Built Linux from Scratch (thesloth.me)
207 points by imthesloth on June 27, 2023 | hide | past | favorite | 104 comments


I did LFS back when I was a pre-teen/teen, so late 90s/early 00s. And while I agree to an extent that it doesn't give you much day-to-day stuff, it really makes you more comfortable with operating systems, and Linux especially.

Nowadays I am a devops/infrastructure engineer, and I can say LFS is a core foundational experience that has let me be better at my job. I know the deeper inner workings of a Linux distro, and that strong foundation helps. Yes, there is so much more to learn, but having LFS in your back pocket of lifetime experiences is great.

I also think it's a one-and-done kind of thing. I don't see a huge benefit to doing it again. Maybe for fun, in a VM, while waiting for other things to finish. It is also much faster nowadays. Back when I did it, I had a 1ghz, 256gb ram machine, and it took me days. I could probably go through it in a day casually with modern computing, especially since I have a Threadripper machine; throwing 64 threads at the make commands will, I'm sure, get my 1 SBU down to a very small number.

If you have a child that is interested in computers, tech, programming, etc. I highly encourage this activity. Could be a fun bonding activity too.


I almost did it in 2008ish. Back then my country had a severe electricity crisis, and in the summer we would alternately get one hour of electricity and one hour of blackout.

So I had to try and break the LFS build into one-hour chunks. I got through quite a bit, but unfortunately there were some multi-hour build steps, and I never got lucky enough with multi-hour stretches of electricity, so I eventually lost interest.

I did learn quite a bit from reading the manual though.


I had the same experience around the same time (though I'm a bit older). LFS really helped me understand the relationship between the bootloader, kernel, init system, and all of the various daemons and tools that make up a complete system.

From there I moved to Gentoo, and then eventually to Ubuntu and other batteries-included distros. But the knowledge I gained from LFS is still a foundational part of my skillset, even though my roles have been more dev-focused than ops/infra.


Somehow LFS didn't help me understand the various subsystems... all I remember is long sed patches and compilation logs.

It did teach me how subtle a running, stable OS is underneath, though. After my first standalone boot, I could enjoy a partially working TCP stack: elinks could browse, curl couldn't, irssi sometimes worked, all of this with random terminal display codes being inserted.


Until I moved to FreeBSD in the mid-2000s, I ran Gentoo primarily. Even though it had a bit more hand-holding than LFS, it still taught me a lot, and getting to choose implementations allowed consideration of design choices (cron, mail, syslog, etc.).

I've tried to return a few times, but it's a bit different now, and I'm happy enough with FreeBSD that I don't have much place for it. I do wish more embedded distros had started with Gentoo (specifically ARM SBCs).


I think one of the advantages of Gentoo or LFS is you get an “introduction” to a bunch of the various parts, so that when you need to change something in the future you know where to start looking.


IMO a secure server is an immutable appliance. I still build Linux from scratch most weeks as a core part of my job.

Bare bones hardened kernel + shim init + target application are all you need, and will be your highest security/reliability systems.

And yeah, threadrippers are a must.


I assume you compile the source code because you want to be sure you don't use any compromised binaries? But how can you be sure the source code wasn't compromised with some obfuscated C code? (Honest question, I'm just a humble application developer.)


It is dramatically easier to hide malware in a compiled artifact than in public source code, not to imply that the latter does not happen.

In security-focused orgs, though, you review all code yourself, with the exception of things with extensive third-party signed review, such as the Linux kernel itself. Even then, I review kernel codepaths critical to my use case, such as random.c.

From there, if I -alone- compile containers, kernels, or binaries, someone could coerce me into tampering with them to compromise all downstream users. The same goes for any central build system I can access. To mitigate this, I ensure my artifact builds are deterministic and sign my changes, and have team members review my changes, reproduce my artifacts bit for bit, then counter-sign the results.
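As a toy illustration of that reproduce-then-counter-sign step (a sketch only; the file names and contents here are made up), two independent builds of the same tree can be compared bit for bit before anyone signs:

```shell
# Two "builders" produce an artifact from the same source tree.
# The GNU tar flags pin file ordering, timestamps, and ownership so the
# output is deterministic; differing digests block counter-signing.
mkdir -p src
printf 'hello\n' > src/file
tar --sort=name --mtime='@0' --owner=0 --group=0 --numeric-owner -cf buildA.tar src
tar --sort=name --mtime='@0' --owner=0 --group=0 --numeric-owner -cf buildB.tar src
hashA=$(sha256sum buildA.tar | cut -d' ' -f1)
hashB=$(sha256sum buildB.tar | cut -d' ' -f1)
if [ "$hashA" = "$hashB" ]; then
    echo "reproducible: safe to counter-sign $hashA"
else
    echo "MISMATCH: do not sign" >&2
fi
```

In a real pipeline the tarball would be a container image, kernel, or binary, and each builder would sign the digest independently.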

It is never wise to be in a position where there is the possibility of you yourself tampering with things that control anything of value, or else someone will coerce you into helping them steal said value.

As a security engineer it is my job to ensure no one ever has to trust anyone, including me.


Typo in your homepage: "Continuious Integration"

Interesting thread!


Good catch!


While I do not build LFS regularly or for production use, the security improvement typically comes from the fact that the end system is _super_small_ and focused. Less software means less attack surface.

Sure, compromised binaries are nasty, but personally I do place quite a lot of trust in the distribution repos.

(PS, if you are reading this and contribute packages to distribution repos: Thank you!)


You can never be 100% sure. Even the compiler, firmware, or hardware could be compromised.

Security comes down to reducing attack surface, ideally to an infinitesimal degree.


> And yeah, threadrippers are a must.

I mean, they give a wonderful quality-of-life boost for compiling, but they are $2,500-6,000 just for the CPU. Of course this can be done on much lower-cost CPUs! A 12-core Ryzen is already at the "I have lots of money to spend" end of what I'd suggest for something like this.


If you are compiling Linux kernels for a living, odds are near 100% your time waiting on compiles costs more than an employer buying you a beefy CPU.


Sorry, by "something like this" I meant LFS. Obviously the word "this" was ambiguous as GP said "I compile linux kernels for work".


As in manually or https://buildroot.org/ ?


I used to make heavy use of buildroot but these days for most of my use cases I just have a kernel, and an init/application binary that I statically compile into the kernel. A simple makefile gets the job done and is easier to review than the whole of buildroot.
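For reference, one way to get that "init statically compiled into the kernel" setup is the kernel's own initramfs file-list format, embedded at build time via CONFIG_INITRAMFS_SOURCE (a sketch; the paths here are hypothetical):

```
# initramfs.list, consumed by the kernel's gen_init_cpio when
#   CONFIG_INITRAMFS_SOURCE="initramfs.list"
dir  /dev                  0755 0 0
nod  /dev/console          0600 0 0 c 5 1
# /init is the statically linked init/application binary
file /init  /path/to/init  0755 0 0
```

With that in the kernel config, a plain kernel build produces a single image containing both the kernel and the application, with no separate root filesystem to assemble.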


What happens if the program crashes? With no init, is there no way to restart it?


Same, though a bit later, circa 2007. It's great for software developers too. It's a good quick intro to many different build systems & philosophies, and shows you how the different pieces of software connect together by standard paths and package management tools like pkgconfig. Being able to quickly dive into a new project, understand how to build it and hook other software into it, debug build problems, and make quick hack changes is extremely valuable for developers.


Yep, me as well, sometime in that era. It was very interesting and informative. I'd never use it for an everyday system, obviously, but it was a good exercise. And I did this on my real machine, before VMs were really a thing :-)


> I had a 1ghz, 256gb ram machine

You had a 256gb ram machine???


Ha, I wish. Good catch on the typo; I can't edit, but it should have been "mb".

It's been so long that thinking of ram in anything other than GB is weird.


How does it compare to installing Arch Linux without an installer? I remember it took me a day or two of tinkering when I did it for the first time (circa 2009), and I learned a lot. To this day, Arch remains my distro of choice.


It's very different. With LFS you build your own distro. You bootstrap the filesystem, build your kernel and everything else from source, and there's no package manager. That means manually managing dependencies, which is a nightmare in and of itself, and something we take for granted with Linux distros.

The LFS book is quite thorough, so you technically only need to copy/paste commands, but you learn _a lot_ about Linux in the process. I highly recommend it to anyone interested in Linux. I did it once in college, and parts of BLFS IIRC, and it's one of the few memorable projects I got from my degree.


Sounds like a valuable formative experience.

Sort of like launching a spaceship in Factorio.


I built LFS like back in 2005 or something. I remember it took forever. I wonder if it's faster or slower now. My computer is definitely faster, but I also have a creeping suspicion the code has gotten a lot bigger. Then at some point I switched to gentoo. I remember I vacillated between gentoo and slackware. The important thing was that I had an onion on my belt, which was the style at the time.


Significantly faster. A few months ago, after building a new rig for the first time in 6 years or so, I did a Gentoo build from minimal to GNOME as an exercise/burn-in/cooling test. With the amount of RAM and cores available now compared to 2005, an 'emerge -e @world' only took a few hours vs the days and days it took back then.


Same, I used LFS for a couple months in college until I got annoyed about manual updates and switched to Gentoo. And back in those days you had to do it on actual hardware, not a VM. I had a job interviewer at the time who asked which Linux distro I used, and it was fun to tell them about that.

I would definitely recommend going through LFS to anyone maintaining Linux systems, it really helps you understand how things work.


It's almost certainly faster if your computer is remotely modern. Compilation scales pretty well with multi-core performance.


I did it in the early 2000s and last year. It's a bajillion times faster now.


I wrote this a few years ago, when LFS made the front page:

I had a lot of fun doing this. You really get a feel for the evolution of build systems -- from older software that uses automake/make to newer programs that use meson/ninja/cmake etc. It was also cool to learn how to bootstrap a bespoke set of development tools tuned for your hardware.

It took me a solid weekend to get everything built. I was able to get a basic LFS system built on a Saturday, and on Sunday I did the "Beyond Linux From Scratch" edition. At one point I got stuck trying to debug a weird interaction between systemd and PAM that took me a while to unravel. That was humbling; I thought I knew just about everything about Linux, but it turns out there are large areas where I just don't have a clue.

The docs are well written and maintained, so there wasn't a lot of frustration there. Even if you're not an old hand at Linux you can likely get pretty far by just diligently following the instructions.

I struggled a lot more trying to make a decent desktop environment than I did getting the OS set up. I spent so much time trying to get a nice-looking toolbar (polybar), and basic stuff (like how patched fonts work) took me an embarrassingly long time to sort out. I also didn't know what a compositor was, or why you might want one. I enjoyed figuring out the basics of compton, which allowed me to get cool transparent backgrounds on windows[0], although I never did quite figure out how to get rounded corners.

[0]: https://muppetlabs.com/~mikeh/spudlyo.png


I did LFS back when autoconf/automake was considered a modern way to build software.


For something that's more of a prepackaged build-your-own-Linux kit, there's also KISS Linux[0]. It's kind of a microdistro with minimal abstraction over the raw guts, and "packages" are just pre-downloaded source code repos that you compile yourself.

The "package manager" is just a shell script. The installation process[1] is entirely manual, so you control every step as you bootstrap up to building your own kernel and installing each subsystem, all the way up to compiling and running Firefox. It's pretty neat.

[0] https://kisslinux.org/

[1] https://kisslinux.org/install


Unfortunately it was maintained by only one dev, who seems to have turned his back on tech and vanished from the web entirely[0]. I think it has since been forked and is maintained by a small community[1]. I liked the concept and used it for some tiny test machines.

[0] https://www.reddit.com/r/linux/comments/m4pwix/what_happened...

[1] https://github.com/kiss-community


There's also Tiny Core Linux, which is amazing when bundled with BusyBox, for both embedded and VM use cases. [1]

Though these days I would probably recommend going with Yocto [2] for the sake of stability and updates.

[1] http://tinycorelinux.net/

[2] https://www.yoctoproject.org/


If you need something more practical, but still challenging, try Slackware.

Use the advanced install feature and only select the packages you need. Build the other software yourself. You can choose your own difficulty this way. You can also follow the distro's way of packaging up software (build scripts), or build it the software developer's way. The main advantages of building software yourself are:

1. You are a developer who wants to customize the software.

2. You want to practice.

3. You want to contribute to a project.

4. You want to have a better understanding of what the code does.

Slackware 15 is a modern platform to build upon.


Looking back, picking Slackware as my desktop OS 20+ years ago was such a great career-building move. I ran Slackware plus an obscure and now-defunct source-based package manager for 10 years, and the experience and mindset have stuck with me ever since, even if I don't use Slackware anymore and never ran it in a professional setting.

Slackware doesn't try to hide anything from you, it takes the long way around, but it does so in order to do the correct thing, and in a way where it's clear to you why things function the way they do.

In some ways Slackware is hard to use, but it's also less frustrating and you don't hit exceedingly hard problems where you feel like the OS is fighting you.


Came here to mention Slackware, my first distro and introduction to Linux back in the late nineties. I thought it had been abandoned; glad to see it's still around.

IceWM gang anyone ?


The vanilla Slackware has been a perfect distro for me. No systemd; easy to keep up to date with slackpkg and sbopkg, without the dependency hell. It has been a daily driver for me both on a Thinkpad and an (oldish) Mac Mini for years now.


It seems to me that slackware + flatpak could make a _really_ stable base for a desktop, but I've not got deep experience with either so could be making stuff up.


Nah, https://buildroot.org/

Slackware was an underdeveloped PITA back then, and it still is now.


I love Buildroot for unashamedly being the peak Makefile.


Never heard of buildroot. Interesting!


I haven't run Slackware for maybe 20 years, but I remember running 7.2 at uni and recall really _enjoying_ building everything and having that level of control. It taught me so much about how Linux works. Conversely, I remember finding my initial experiences with dpkg and rpm quite frustrating in comparison.


dpkg is a bit of a weird experience, because the "easy" way of building a (binary) package is literally "just put the files that need to be deployed in one directory and some metadata in another, then run a command to package it", i.e. extremely easy and entirely independent of how you build it.

But the "proper" way involves a million scripts and checks that make sure your .deb is "distribution grade", and it can be quite complex.
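The "easy" way described above is roughly this (a sketch; the package name and paths are made up):

```shell
# 1. Put the files to deploy in one directory tree...
mkdir -p pkgroot/usr/local/bin
printf '#!/bin/sh\necho hello\n' > pkgroot/usr/local/bin/hello-example
chmod 755 pkgroot/usr/local/bin/hello-example

# 2. ...put the metadata in DEBIAN/control...
mkdir -p pkgroot/DEBIAN
cat > pkgroot/DEBIAN/control <<'EOF'
Package: hello-example
Version: 0.1
Architecture: all
Maintainer: Nobody <nobody@example.com>
Description: Toy package built the "easy" way
EOF

# 3. ...and run one command to package it.
dpkg-deb --build pkgroot hello-example_0.1_all.deb
```

The "distribution grade" route adds debhelper, lintian, source packages, and so on, which is where the complexity lives.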


Even more practical: Buildroot.


I highly recommend building an LFS. It was one of my first big personal endeavors with computers. In the article, the author says they didn't learn stuff that'll _necessarily help them in their day-to-day_, which I can echo, but I definitely learned a lot about different commands, the file structure of Linux, `hier`, changing root, cross-compiling, toolchain building... it's really an awesome and fun process, and it's especially satisfying when you manage to boot into it for the first time. I will say I'm much less fearful of the GRUB CLI than I used to be ;)


Yeah, I totally agree with your comment. Just to clarify for anyone reading this: it was not my intention to downplay the value of building an LFS; rather, I wanted to set realistic expectations about what you can expect to learn, so that you don't do it for the wrong reasons.


I deployed ipcop[0] routers a few times.

Ipcop was a pretty good router. It could be booted off a 1.44MB floppy disk. It was built using LFS, so it took a while to install on a home-grade machine (like, a day!). This was ages ago; apparently they're still going.

I didn't learn anything from installing it, except that it's possible, as a user, to build the whole toolchain, the OS, and the application, starting from assembly language. And that using LFS, you can make a really tiny Linux. Making a router was a good application of LFS; I'm surprised the search engines have forgotten it.

[0] http://www.ipcop.org/

It's important that we can always do that.


I did this back in high school (so late 90s). It was a really fun experience, but I don't think I learned as much as I had hoped. I learned a lot about bootstrapping a system but not how to maintain it. It turns out package managers are really important for that.

I ended up using the system for a few months until it collected enough cruft that I started over with some other distro.


Flamebait, but:

> Slackware and LFS are the Haskells of the Linux distribution world. People jump to the extreme end of the spectrum, and either get burnt or remain unproductive for life, when they should have just used OCaml or F# instead.

https://blog.nawaz.org/posts/2023/May/20-years-of-gentoo/

I've done both LFS and Gentoo. While LFS is certainly fun, in practice I don't think you really learn that much more than with Gentoo. The benefit of the latter is it's easy to stick to for life.


I built an LFS system during one weekend my first semester of freshman year of college. I had too much free time I guess...

At the time I made it a 32-bit system since LFS didn't (doesn't?) support multilib and I knew I would need some 32-bit libraries.

I used that as my main system for quite a long time, upgrading software or installing based on BLFS or my own intuition as necessary. It worked pretty well! It was an invaluable experience in the development of my Linux expertise.

After about 5 years I got frustrated with the 32-bit system so I did an in-place upgrade to 64-bit. It was thrilling to come out the other end of that, to say the least (seriously). The training wheels were definitely off, but LFS had educated me enough to be confident in doing it. Also I kept around all the 32-bit stuff of course, so I could incrementally upgrade things.

After a few more years (maybe 2018ish?) I grew weary and changed to Arch (now I use void) :)

All that being said, I highly recommend LFS!


Some 20 years ago I went and built "Linux From Scratch". I think this (among other similar exercises I did) enables me to find my way pretty quickly around any new or old tech thrown my way.

I highly recommend this to anyone interested in 'computers'.


After I finish the whole process, can I slap pacman on top of my freshly built LFS and effectively transform it into Arch and daily drive it? Or is it only meant "for the journey" of building it, and can't realistically be used as a daily?


It's not really intended to be a daily driver once you've built it. There is "Beyond Linux From Scratch", which extends the system, but normal distros with normal package managers are just infinitely more usable than an LFS system. It really is just about the journey and learning how it all works from zero.


I have used NIXOS_LUSTRATE (wipes the other Linux on the next reboot) on several systems where it was not supposed to be used. Once you build nix, you should be able to bootstrap NixOS from there.


Why do that when there is pacstrap?


Because I want to experience LFS?


I spent about two weeks in the early 90s with an unholy collection of DOS, 386BSD and Minix futzing together a bootable linux system on a 386/25 with 1M of memory. A couple months later someone told me about this thing called "SLS." I am definitely a fan of a decent distro.

But yes... building Linux from scratch can be very educational. It's probably a rite of passage for jedi geeks: LFS, designing and spinning your own SBC, using wireshark/ethereal to debug a borked Cisco router. Sadly, I still remember much more about LILO than I ever learned about Grub.


Recently I was making an embedded Linux image for enclaves, where a shell and shell utilities did not make sense and high security and auditability were needed.

I learned a bit of rust and wrote a minimal init system perfect for my use case: https://github.com/distrust-foundation/EnclaveOS/blob/master...

The init system is statically compiled into the kernel as a CPIO.

This is about as bare bones as you can get with linux, and may help others understand the essentials.

I was only able to do this because of years of running gentoo and building linux for various embedded projects. It pays off!

You could swap out my init binary for busybox init and have built a full interactive linux distro from scratch in under an hour.


I did it in 2001 and it was one of the most educational things I've ever done. I think there should be a college course which gives you a box of computer parts at the beginning of the semester and at the end you get the grade displayed on a web page hosted on that computer built entirely from source.

I have some precedent for that kind of thing, back in 1987 I made a deal with a prof that as an independent study course a friend and I were going to build a pair of voice synthesizers on IBM PC Prototype boards using the synth chips available at Radio Shack and he'd give us the grade my computer verbally asked him for at the end of the semester. I had no backup plan and it was a crazy risk, but we got A's and the teacher regretted not having us make one for him as well.


I almost did it, but it really didn't like that I was using NixOS as my host, so I built a package manager instead: https://git.sr.ht/~nektro/wifilylinux


Linux from Scratch is educational but fundamentally impractical to use.

I recommend using embedded Linux instead, e.g. with Buildroot. You get an understanding of the fundamentals of the hardware, kernel, and build toolchain. And you can create small hardware systems that do fun things.

I ran Gentoo for a few years, and enjoyed it. Doing the install from stage1 gives you a similar understanding of how the pieces fit together and enables you to fix low level system problems, e.g. with disk failures. The community is also smart, as it self-selects for people who have skills and good attitude. Unfortunately Gentoo is too slow to install and quirky to deploy on servers.


I remember this being a phase I went through when I was in school.

Back then I ran into the same issue as everyone else, it basically felt like Slackware, or FreeBSD with the ports but sans all the handy patches.

I'd love it if their docs[1] actually mentioned how to install an existing package manager on your LFS; that would basically make it a distro.

1. https://www.linuxfromscratch.org/lfs/view/stable/chapter08/p...


This reminds me of the very early days on my blazing-fast 486, where you would go to build the kernel and walk away for two days waiting for the next compile failure.


I did it a few times. It's so much easier nowadays with https://buildroot.org/


For those who've been through LFS, would you say that it's a good first step for someone who wants to roll their own bistro at some point down the road?


> For those who've been through LFS, would you say that it's a good first step for someone who wants to roll their own bistro at some point down the road?

<insert joke about the similarities between making sandwiches (bistro) and building a linux (distro)>

Yes definitely! If you want to roll a new distro, I would dare say LFS is an essential step. Now that said, if your new "distro" is just going to be Ubuntu with a couple tweaked settings, you won't get nearly as much from it. But any non-trivial distro rolling you can absolutely benefit.


Haha, autocorrect strikes again!

Thanks, and yeah if I were to roll a new distro it’d be substantially different from anything currently out there.


There's the LFS Project[0] route and then there's building things on top of the latest Debian or Arch releases. All are good learning exercises. The only real hurdle is installing drivers for things like Printers, Wi-Fi adapters, graphics cards, etc. That can be a headache-and-a-half.

[0] https://www.linuxfromscratch.org/lfs/


Beyond Linux From Scratch (BLFS) is helpful with the latter.


I like LFS, and I think it's a good teaching tool, but probably most people on HN won't get much out of it. It's like building a car from a kit: You don't know why you're connecting the belt between generic oddly-shaped box A and generic oddly-shaped box B. I've thought about making an LFS where you use a bare kernel image and write a simple init process, with busybox as a backup.


I got to do an independent study project junior year of high school where I built the LFS project and went on to write my own (super basic) source-based package management system. It was hands down the most valuable learning experience I've ever had. I highly recommend going through the process once if you want a better understanding of Linux/operating systems/compilers/toolchains!


I did one based around LFS a year or two back, but I used the Heirloom pkgtools stuff to publish packages in SVR4 format just for a laugh. I even packaged up and built CDE for it, it was a real fun parody of Solaris and other SVR4 systems. I called it HeadRat Linux. It really did add a nice spicy SVR4 style coating over Linux which I thought was kind of hilarious.


It's a waste of time. There is a reason why this is all done automatically. This is one of the things I think about when people say that "Linux is only free if you don't value your time."

Even if you try to argue it is educational: learning how to build programs does not require LFS. LFS is inefficient as a learning tool.


I did it multiple times in the past, at a few companies where I worked on various Linux-based network security appliances. Using standard Linux distros on them is problematic for various reasons.

So I developed my own build system, which probably looked somewhat like Nix, for building the distribution and managing updates.

LFS was a great place to start.


I started doing it back in 2019, after college, but at the same time I was doing job interviews with code challenges that would take 2-3 days, so I ended up giving up halfway through.

It was a miscalculation on my part; I underestimated how long it would take me to complete the whole LFS thing. Still on my todo list though.


I built LFS in the early 2000s and I must say that I felt pretty much the same way. Compile times were pretty high for some of the items and it was fun to bootstrap the entire system. I learned a lot but I don't intend to do it again. I don't really need the knowledge these days.


I tried this twice, first time failed in stage 1... second time I got it all built but didn't build the kernel or boot loader...

It's certainly a learning experience, but I'm happy with Gentoo, thanks.


Wait, no log or anything besides the sharing with the world that he did it?


Is an LFS system faster than a regular distro? If the compiler takes your specific CPU into account and applies the available optimizations, it should be, right?


Could someone here please let the linuxfromscratch.org admins know that you can't access their site from Starlink?


I did LFS long back and replaced its default terminal as nodejs terminal. It was fun and learned so much from it.


> replaced its default terminal as nodejs terminal

What do you mean? Did you replace the login shell with a NodeJS interpreter or did you use a terminal emulator that ran on NodeJS?


The first process after the kernel boots up was a Node.js terminal. So when my machine boots up, I see a Node.js terminal instead of bash.


I developed professionally on top of a LFS system in the early/mid 2000s.

In retrospect it was insanity layered upon insanity.


This link contains no additional information over the headline. He says he installed LFS.

No need to click on the link.


Do the hashes match? Is it 'wrong' if they don't?


Hi, in which context are you referring to the 'hashes'?

If you're asking about the hashes for the required packages in 'Chapter 3. Packages and Patches', the hashes should match if you're downloading from the provided mirrors - https://www.linuxfromscratch.org/lfs/mirrors.html#files

Downloading from these mirrors ensures that you have the exact versions needed for the build; in any other case, you run the risk of the system not working as expected/documented in the book.


Nevermind, I thought this was about the Linux kernel.


If you have children, a much better investment than college is to have them install LFS and forbid them from using Windows/Mac. I did not learn a damn thing in college because I'd been using Linux since age 12.


True. Building Linux from scratch will teach you everything in a modern CS curriculum, including the `cut' operator in Prolog, Boyce-Codd Normal Form, and alpha-beta pruning. \end{snark}


Could someone explain the US school system to me, please? Especially the difference between college and university. For comparison, in Poland we have this schema (±1 year):

- 0-6 preschool

- 7-15 basic school (mandatory)

- 16-19 middle school

- 20+ university or polytechnic

And middle school generally divides into: a) 3 years of craft school, then straight into the workforce; or b) 4 years of general school, no craft knowledge, where you're expected to go to university afterwards or become a clerk, or something else unspecified; or c) ~4 years of technical school in a specific craft with a "qualified technician" diploma, a sort of low-level "engineer" (with enough knowledge to be responsible for things), then maybe polytechnic or university.

At universities, the typical degrees are: master's, PhD, professor.

So where in such a schema does college sit?


College is tertiary education, ie university or similar. My understanding is that the distinction between college and university is an organizational one, not the curriculum. But I’m not from the US.

What you call middle school they call high school.



All of those are "very" "useful" in a huge variety of CS-related jobs /s


Perhaps you missed the point of college if all you did were things within your comfort zone


I would guess GP meant "about Linux."

I had the same experience with college. I'd been using Linux for years and learned nothing new about Linux in college. It was nice because the class moved fast and a lot of people who had never Linuxed or CLIed in their lives really struggled to get through it and keep up.


Maybe they missed your point, but plenty of people with expertise go to college in order to get a piece of paper that allows them to work. Making it a middle-class spiritual rite of passage isn't the only way to do college right.


They could've taken other classes then? Lift the lid a bit; learn about computer architecture, VLSI, transistor physics, fabrication, analog devices, MEMS, antenna design. If you already know a lot about computer science, it's unsurprising that you wouldn't learn too many radically new things from just computer science classes, but you don't have to do that. Stay curious and seek out things you don't know.


If you approach college or any education with the attitude that you know everything already, you will likely learn about as much as if that were actually true.


I have a similar background and got a lot out of my CS program; it's about what you put into it. Sorry you had a bad experience, that can absolutely make it harder to invest.



