Protip if you have good internet and don't want to configure everything locally: Jetbrains Gateway [1].
At the last company we had Docker, Java 8, Wildfly 10, Gradle 4 and I didn't manage to make it run locally on M1.
I was pleasantly surprised by how smooth it was to connect to an Ubuntu VM with JetBrains Gateway. You use the native JetBrains app on your computer (IntelliJ IDEA in my case), but everything is executed and compiled on the remote machine (the Ubuntu VM). Another huge upside is that Docker is running on Ubuntu, which is a lot faster than on OSX. The downside is that you are dependent on the internet, of course.
Officially the product is still in beta, but it worked well enough for me.
Yeah, same. I don't get the point of buying an expensive, benchmark-crushing piece of ARM hardware for development work if it has the utility of a $200 Chromebook, where you have to SSH into powerful x86 machines to actually achieve your development goal.
The biggest performance eye-opener for me was when I opened outlook.office.com. On the M1 it's almost instantaneous, whereas on my old Mac (maxed-out 2015 MBP on Mojave) or my Ubuntu desktop (32GB RAM, i7-6700) it took 10 seconds or more (all on the same network).
I noticed I fell out of the "flow" a lot less on M1, because actions that took a few seconds on other machines are now instantaneous.
Regarding IntelliJ IDEA and Docker: my old 16GB 2015 MBP on Mojave was struggling hard and getting very hot with a 100K LoC project. I would definitely use Gateway on the old machine as well.
To be fair, both your 2015 MBP and 2015 6th gen Intel desktop are pretty outdated compared to your M1 Macbook, so of course they're slow in comparison, what else did you expect?
For a more apples to apples comparison, Intel 12th gen and AMD Ryzen 6000 series would also give you a lightning fast experience comparable to your M1, especially since modern systems come with faster memory and faster storage than your 2015 machines, contributing to the perception of speed.
Yes, hard to believe, but that was seven years ago. In the old days it would have been like comparing a 286 to a 486. Intel has certainly plateaued since then, but Apple's ARM kept some of that momentum.
One thing I'll throw out there is that I've recently been running IntelliJ using jdk 17 with ZGC as the garbage collector, and it seems snappier. (You can use the 17.0 runtime that jetbrains releases on github, this page has more related info https://mustafaakin.dev/posts/2021-12-08-running-intellij-id... )
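Concretely, the change is tiny once the IDE is on a 17-based runtime: add the ZGC flag via Help | Edit Custom VM Options. Mine looks roughly like this (the heap sizes are just my own guess, tune to taste):
    -Xms512m
    -Xmx4096m
    -XX:+UseZGC
ZGC trades a little throughput for very short pauses, which is exactly the trade-off you want in an interactive app like an IDE.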
We might have different definitions of "just fine".
On my M1 Max MBP, JetBrains Rider starts in under two seconds. Loading the quite large project takes another two.
On my previous laptop, an i7 MBP, it took on the order of minutes to get Rider to the point where I could actually start writing code with code completion. It sounded like a jet taking off while Rider blasted all cores at max to enable the smart completions.
The M1 Max isn't even slightly warm in the same scenario. I haven't found a single spot where it has even slightly stuttered.
On Big Sur at least there are dozens of daemons running that can't be turned off due to the read-only system partition. I'd check those and third-party software that could be sapping performance. Gets more important with older hardware.
Parallels works fine, and VMware Fusion provides a tech preview for M1 – I'm running Debian arm64 VMs this way. Both (AFAIK) don't support x86 guests and don't plan to (virtualization vs emulation). So you'd have the same issues as with Docker.
There seems to be a workaround with qemu and UTM, but I can't speak to its performance or how viable a solution it is.
Gave this a try a few weeks ago (for CLion with JetBrains Client; IntelliJ for Java might be further along, of course) and felt it was still quite lacking. Loads of features weren't there yet. Off the top of my head: no way to use my customized keymap, no git blame annotations, fundamental navigation shortcuts missing (jump back, navigate via file structure), no way to attach the debugger to a process, ...
I went back to remote-controlling the full-featured IDE via Projector, their earlier approach to remote development. That works extremely well for what it is. The richer Gateway client will probably catch up in a while, but not quite yet.
I am able to attach to the remote JVM fine. However the roundtrip makes the QoL features like getting object contents a little slow. But it works!
The main issue I've seen is that the gateway/target JVM combo somehow gets wedged in some weird state that persists across restarts, and I found myself killing IDEA processes by hand on the remote server. But hey, it's beta! Works pretty well, considering.
Yeah, that statement prompted me to try Gateway. It felt a bit premature then. The initial setup is a bit more involved with Projector, which may be a good reason to direct new users to the more recent offering.
Ah, very cool. For java dev I haven't yet felt the need to containerize anything, normally mvn clean install works and does everything needed. But for Python I've used remote interpreters through docker in PyCharm lately (since getting a python env installed with wheels & stuff properly is sometimes almost impossible).
VS Code has had some more luck with its frontend/backend architecture making this easier to pull off; glad to see JetBrains is on the move, as I prefer them.
YMMV, but I've had very good luck just using asdf for...pretty much everything, Python included. In a normal week I'll probably touch half a dozen environments--Node, PHP, Python, Java, Ruby, Golang, maybe sometimes dotnet-core--and asdf not only Just Works for me, but has done so without thinking about it for going on three years or so, when stuff like rvm/rbenv changed rapidly enough as to necessitate changing it up to stay on the same page as my teammates.
Our experience is gcloud and some other commands get messed up :/
Probably not asdf's fault, though. I've had multiple issues with gcloud not being compatible with my setup, spamming errors like /tmp/_MEIRZ3igG/libssl.so.1.1
What asdf doesn't solve, though, is the setup for new devs. Python projects can sometimes take days to get running properly on a machine because of various differences; asdf solves some of it but replaces it with other installation steps instead.
That's what I like about a dockerized setup: if it works in one place, it works for everyone (almost; Nix is probably better).
I can't speak to Python beyond the fairly minimal touch points I have with our projects, but Python anecdotally seems much rougher than it should be in many respects. But pretty much everything I touch (Node, Java, Ruby, dotnet-core) ends up being solved with asdf, a Brewfile for the Mac users and yum-and-xargs for the Fedora folks (me), in very short order. As far as new machines go, I have a shell script that I share with new folks (and my own version for a new machine) that just gets things done. (Used to use chef-zero, but that got really crusty.)
I like Docker for a lot of things. That said, working on code inside containers is pretty awful, in my experience, and I'd much rather use docker-compose with something like Traefik to route outside Docker so I can run my service locally and everything works as I expect it. You can always tell a project that I work on because there's a bash script in there that fires up tmux with docker-compose, all services under nodemon, ngrok, etc. all good to go. ;)
We're moving our whole company to this product right now. Instead of buying (and upgrading) more and more powerful laptops for our (fully remote) dev team, we just get them whatever thin&light stuff they want, and a dedicated server with large RAM so that they can run the whole test environment easily. There are some rough edges here and there, but it's a game changer.
I wanted to transition to this development model a year ago, but unfortunately X11 forwarding was slow on mobile connections, and as far as I'm aware (if you know a workaround let me know), on Linux, RDP can't be made to just share a single app, only full desktop sessions.
We experimented with X forwarding, but it worked well only on e.g. local networks. Some people still ended up using X forwarding, but JB Gateway or VSC Remote is in a different league.
We have also tried Jetbrains Projector, which is basically a different rendering engine for Java Swing. A remote IDEA instance was rendering the UI through HTML, i.e. you could develop in a browser. It worked relatively well, but there were some issues around copy/paste, etc.
The only thing I've found to be halfway usable is x2go. With it you can have a rootless window session. I still end up using the VS Code Remote Development tools a lot, though.
Thank you for your suggestion, I just gave this a try, and it's almost there but there are still some rough edges. It needs to mature a little bit more.
Though this got me thinking a little bit. I ordered an M1 Max with 32GB, which I have been waiting on for over 5 months now. I'm actually looking into getting the M1 Air with 8GB instead, as I think it will suffice for my Java development. I don't need to run VMs on that machine.
I'm gonna think this over through the weekend. I don't really need the firepower of the M1 Max, as it has similar single-core performance to the M1 Air. And if this product matures, I won't really need that much RAM either.
You should definitely give the M1 MBA a try and return it if it doesn't work well. Mine is only 8GB of RAM too, but I haven't noticed paging slowdowns or any problems.
Same with Emacs + CIDER for Clojure instead of Common Lisp. Fully remote project setup on a powerful server, accessed from local Emacs with TRAMP mode. CIDER commands (REPL included) work over the remote connection perfectly fine and run on the remote host, including automatic SSH tunnelling for the nREPL port if needed (for example, if the remote host is behind a firewall).
WSL2 is running a VSCode server, exposes a port to the host and the VSC client from the host connects to it. In theory you could also run the VSCode Server on a remote machine.
I think the "Remote-SSH" plugin is a better fit for a comparison though, but @vital101's comment is not wrong.
As an alternative I’ve had great experiences so far with GitHub’s Codespaces.
I couldn’t see myself going back to just running everything outside of at least some containerised environment at a minimum even if not fully remote.
I do a bunch of front end web stuff as well and now the thought of just running npm on my local machine gives me the chills. It’s akin to attending an orgy without a condom :)
Let's just download thousands of pieces of random unaudited code that has some level of updates almost daily from people I've never met before and run it on my machine using an account which also probably has root access and where that same code can access any part of my filesystem and has zero meaningful restrictions on what it does with it… yes it is.
Here's the thing: maybe you don't need Docker to use Java, which was invented exactly to abstract away the underlying OS and CPU architecture (and isn't the only one at that game).
Even worse, given that Docker and Kubernetes basically take Java/.NET application servers to other programming languages.
It's not so much about the Java code itself. When your Java app needs to talk to a database, it's convenient to be able to write integration tests against a running instance of the real DB. Ideally you'd also have a light-weight mock DB for unit tests, but some things only show up against the real deal. I guess you _could_ build that kind of test environment without containers, but I sure wouldn't want to.
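To make it concrete, this is roughly what that looks like with something like Testcontainers (just one common way to do it, not the only one; the class name and image tag below are illustrative, and it assumes the Testcontainers postgresql module, JUnit 5 and the Postgres JDBC driver are on the test classpath):
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.PostgreSQLContainer;
    import java.sql.Connection;
    import java.sql.DriverManager;

    class OrderRepositoryIT {
        @Test
        void worksAgainstARealPostgres() throws Exception {
            // Start a throwaway Postgres in Docker just for this test run.
            try (PostgreSQLContainer<?> db = new PostgreSQLContainer<>("postgres:14-alpine")) {
                db.start();
                try (Connection conn = DriverManager.getConnection(
                        db.getJdbcUrl(), db.getUsername(), db.getPassword())) {
                    // ...exercise the real data-access code against `conn` here...
                    Assertions.assertTrue(conn.isValid(2));
                }
            }
        }
    }
The container is created fresh, so every test run starts from a clean, real database and gets torn down afterwards.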
If you're using an open source database like MySQL or Postgres it's pretty trivial to do this without containers. All you need is ~10 lines of SQL to create a new database and database user on your single local database instance. I don't understand why anyone would use a different database for testing and production. As far as I'm concerned, Postgres/MySQL are lightweight. I certainly have them running permanently on my machine and they use almost no resources when not in active use.
Now you need to git-bisect. And the range includes versions that ran in production with three different versions of your DB daemon (MySQL, PostgreSQL, whatever). The version might matter—in fact, look at that, the oldest few commits error on the current version of the DB.
Or you need to try something on another project that uses a different version of your database server. Or you need to pinch-hit on a project that has a half-dozen service dependencies at particular versions, none of which you have installed and running already.
The ability to very easily spin up clean installs of a bunch of services at arbitrary versions is incredibly useful. Containers aren't the only way to do that, but Docker does make it pretty damn convenient.
IO in the Mac "Docker Desktop" app (which wraps a Linux VM and a virtualization framework) is quite slow even on x86, so people often run e.g. PostgreSQL as a native app.
I have to hard disagree here. Plenty of Maven project builds are not immutable. They are buggy and unreliable when producing artifacts.
Also, Java 9 broke the backward compatibility promise, and Java 17 broke it further by disabling a lot of default modules. Docker helps with A/B testing multiple variations of the JDK for our builds.
We definitely need Docker. It's a life saver for so many of our Java projects.
Not to mention, every single CI/CD engineer in a big company will mandate Docker as a packaging requirement anyway. So why not do it right and use Docker anyway?
> Also, Java 9 broke the backward compatibility promise, and Java 17 broke it further by disabling a lot of default modules. Docker helps with A/B testing multiple variations of the JDK for our builds.
Does Java not have a version manager that lets you install multiple JDKs side-by-side and easily switch between them? For ecosystems I'm familiar with, like Node.js and Rust, this is as simple as a single command to install a given version plus a text file with the required version in each repository(/directory) (it then gets used automatically when running code in that repo).
The answer for why not use Docker for development if you're using in production (on Windows/macOS) is that it's much slower. So the same reason that I have separate debug (fast compile time) and release (optimised) builds. If you're on Linux then Docker is great.
This almost always falls apart when you are developing two projects with conflicting stacks simultaneously. Can I run two JVMs with different versions and configurations? Multiple MySQL servers? Multiple web servers? And if I make a change to any one of them, how do I roll it out to my entire dev team? What would take endless tinkering becomes trivial with one Docker script.
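E.g. the "multiple MySQL servers" case is just a few lines of compose config (service names, ports and the password here are arbitrary), and everyone on the team gets the identical setup with `docker compose up`:
    services:
      mysql-legacy:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: dev
        ports:
          - "3306:3306"
      mysql-new:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: dev
        ports:
          - "3307:3306"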
Yes, yes, yes; often teams use shell scripts, Gradle, or Maven.
The same config you'd put in the Docker YAML or whatever would go into the Maven (or whatever) config, except that the devs would be familiar with the existing tools and wouldn't spend countless hours mucking about with Dockerfiles.
Also, unless you're using Docker on x86 AND Linux, things likely will not work, or at least you'll run into memory, performance, or other compat issues.
After working with Java/Python for years, I had forgotten about the hell people go through with other langs deploying on diverse OSes/arches.
Well my argument is the exact opposite – if I know Docker why would I spend countless hours individually mucking about with maven and jvm and whatever dozens of custom components the service uses that I might not even know of? And repeat that for every other stack.
If I need to test my code locally against a service some other team develops, the only exchange needs to be "here's a docker compose file you can run with one command", not a wiki with shell scripts and dozens of other instructions that were likely outdated 3 years ago.
I understand your point. The article is about Java dev from a Java dev's perspective so your assertion "if I know Docker" is often not true for this class.
As opposed to my assertion that "a Java dev knows one of {gradle,maven,shell}", which is almost always true.
Yeah, it's hilarious to see Docker so misused. Most Java/.Net environments will just work on any system without Docker. Adding Docker gives you a bunch of native platform and emulation pains you wouldn't have otherwise.
You don’t “need” docker at all. But it makes things easier. You can build any application as a self-contained executable or directory with executable and related files. But it turned out it was not enough.
Imagine Keycloak. It's a classic enterprise Java application with a database. You can deploy it as a WAR file with a configured JDBC connection.
But people who use Keycloak often have little idea what Java is. They write code in Go and call Keycloak's interfaces via its REST API. They just need to start that thing and connect it to some database.
With docker they'll get it up and running in minutes.
That leaves the whole setup of the application server that the EAR file needs out of the picture, though. That stuff is not specified declaratively anywhere in the EAR, so it's just a wildcard that can make your application work or not work depending on version and configuration.
Apparently those advocating stuff like Docker for OS-agnostic languages never saw them in the first place, probably busy in kindergarten, and are now pushing for Docker + WASM instead, 10 years later.
Yep. The last company I was at was a Java shop. We had no need for containers. We built a fat jar for each "service." A deploy was little more than copying that fat jar + configuration file over. Multiple services easily ran without Docker.
This approach was way lighter weight than pushing enormous images around.
To get the best performance, many Java applications, especially network-intensive ones, use native libraries. Netty is a perfect example, and is used by a lot of projects.
> Even worse, given that Docker and Kubernetes basically take Java/.NET application servers to other programming languages.
No they don’t. Not sure what you intended to convey here.
Many (basically all?) Java libraries that use native code have pre-compiled code for all platforms. They'll just work on all platforms.
We've been upgrading our dependencies for arm64 support; for the most part it is as simple as updating our pinned version to a newer version of the jar. Sometimes the native code is in a separate jar, so you just add it (OpenCV works this way).
OK. The next problem is, what if you have a bunch of different apps that require different JVMs, potentially at different versions, and you want to run them all on the same box? You can do this with JAVA_HOME, but running them in containers is a lot more convenient, and safer, because you can ship the runtime with the app. As a developer, you also don't have to concern yourself with whether the target machine already has the correct JVM installed, and you can update it yourself if you like, without waiting on someone else to do it for you.
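The "ship the runtime with the app" part really is just the base image line; a rough sketch (tags and paths are illustrative):
    FROM eclipse-temurin:17-jre
    COPY target/app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]
The app that's stuck on 11 gets eclipse-temurin:11-jre in its own image, and neither one cares what, if anything, is installed on the host.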
You also get the practical benefits of containers such as a convenient distribution mechanism that is language agnostic (a container registry) and abstracted management of network config like port bindings.
You’ll always be able to build a unique deployment solution for Java, python, or a native binary. But containers let you solve this problem the same way for every program.
If you think you know better, why don't you give everyone a useful, detailed solution, instead of providing curt, unsubstantive, and argumentative responses?
Per the HN Guidelines:
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
The technology is well established now. I've noticed a similar kind of solution in the Python community, swatting flies with sledgehammers becomes the norm because modern machines can often take it. Until they can't.
That's not a solution. Telling people to go look things up themselves in some unspecified place and devise their own solutions is pretty much a giant middle finger to this community (and is still against the HN rule against shallow dismissals). It also does nothing to persuade anyone that you actually know what you're talking about, as opposed to someone who just makes shallow criticisms to scoop up HN points.
It's not hard to imagine that using a big container solution designed for another operating system (with a performance hit to match) is not the most elegant solution here. Especially when the alternative is an environment variable and a zip file, which you already mentioned yourself. There is no step three.
I got the MacBook Pro M1 16GB when it was launched in November 2020. I was losing my previous provided laptop by leaving my previous job and needed to buy one to join a startup program. It was a big gamble but I am the kind of early adopter guy that did not want to miss out on this new CPU fun.
It meant that in the beginning IDEs, Java, and things like Node all ran in Rosetta mode and were very slow. I was almost starting to regret my choice. Sometimes there were ARM fixes committed for things like NodeJS, but they were not yet released, so I needed to build my own version. Same issue with the JVM: finding something able to run smoothly. (It felt like the slowdown was extra painful for JIT-compiled languages.) Also, for Docker it meant waiting some months for something that would work properly, but luckily I could manage without it at the start. I was happy to see the major blockers improve within a few months, and for some lagging dependencies I pushed fixes myself.
At this moment almost 1.5 years later everything works smoothly. The IDE, Java, Node etc. I still had to navigate around some specific dependencies but the fully native M1 development flow is so smooth compared to my previous Intel MacBook. I am quite happy.
Rosetta is fast unless any kind of JITed code is involved. A lot of Java development happens with tools written in Java for the JVM, which relies heavily on JIT compilation.
The JetBrains IDEs were bordering on unusable under Rosetta. They would speed up over time as Rosetta did its work, but it was multiple tens of minutes of slow-as-molasses until things got better, and after each IDE restart you were back at square one.
Thankfully, JetBrains released updates to their products to run on a bundled ARM JVM very quickly.
Very true! I received my MacBook on November 23rd 2020, and JetBrains released an ARM-optimised version of IntelliJ on December 30th 2020. I may have used an EAP version some days or a few weeks before release. https://blog.jetbrains.com/idea/2020/12/intellij-idea-2020-3...
I remember that first month as being very painful to use the IDE. It was really dramatically slow.
Discord was also pretty bad under Rosetta. Last I checked the Canary version was built for Apple Silicon and ran much better, though I'm not sure if that's made it to the stable release yet.
I remember last fall, in a CS class (when I got an M1 MBP for the first time; the chip itself had been out for a year and was sold through the school's laptop program), I was the only one who could figure out how to make the JavaFX assignments work on M1. They would all load the GUI okay but crash as soon as anything was clicked.
The error message was from the native JDK code, so it was not very useful. I just gave up and searched the bug tracker for "M1" until I found something that looked close. IIRC it was some weird error caused by code that had been objectively wrong for several years, but the race/error condition had never been observed on an Intel machine. Thankfully the fix was in an EA build of OpenJDK, otherwise I probably would have given up and thrown up a VM in the cloud to run it in.
There is more nuance to it than this, but basically on x86 all memory writes are made available to all cores via main memory, whereas on aarch64 they are not.
On x86, a write by core A to memory will be visible to core B if core B reads from main memory.
On aarch64, a write by core A will not immediately get published to main memory (it will likely stay in cache: L1, L2, etc.), so even if core B tries to read from main memory it won't see the value from core A.
Ultimately aarch64's "weak"(er) memory model is more efficient as the programmer/compiler can make more efficient memory accesses. This results in fewer cache invalidations between cores. The problem in practice is that tons of production code has been written which assumes the x86 memory model. It may also just be a concurrency bug which doesn't manifest on x86 but does on aarch64 like in the post.
Again, this is a simplification of what happens but I think it illustrates the difference to some degree.
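A tiny Java sketch of the shape of bug I mean (the class name is made up; per the Java memory model this is broken on every architecture, but a weaker hardware model makes it fail far more often in practice):
    class VisibilityBug {
        static boolean ready;  // missing `volatile`, so there is no happens-before edge
        static int value;
        public static void main(String[] args) throws InterruptedException {
            Thread reader = new Thread(() -> {
                while (!ready) { }          // may spin forever or see the flag late
                System.out.println(value);  // and is still allowed to print 0, not 42
            });
            reader.start();
            value = 42;
            ready = true;
            reader.join();
        }
    }
Declaring `ready` as `volatile` (or using proper synchronization) restores the ordering guarantee and makes it behave the same everywhere.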
That reminds me of an experience targeting x86 Windows and a G3 Mac (IBM PowerPC 750) with some C networking code (a thin client). Immediately I got a “bus error” on the Mac, even though it worked fine on the Pentium III. I found the problem was a misaligned memory access - a blatant mistake on my part - that the Pentium III just covered for somehow. You can read this as an example of the robustness principle, but I recall feeling I’d prefer the CPU just tell me something is wrong and not cover it up.
> All other development tools that we use on a daily basis either provide an arm64 build or the emulated x64 version works fine: IntelliJ IDEA, Visual Studio Code, Slack, Notion, Docker for Mac, Spotify, Firefox, Microsoft Teams, Postman.
I had a little chuckle at Spotify being in the list of developer tools. I feel the same. No techno, no code.
I mean, the study I remember (from when I was in some class 20 years ago or something, where we would have analyzed this sort of paper) showed that developers listening to music 1) completed the tasks at the same rate as the developers who were not listening to music, 2) reported the task being less boring than the developers who were not listening to music, and... 3) were much much less likely to notice that the code they had been asked to write was an elaborate maze of math that could be replaced with "return 0"; I thereby only listen to music while coding when I need to keep my morale up typing something I already pre-planned myself.
I wholeheartedly agree - I am working on a very technical new build at the moment and cannot listen to music when coding for it, but when I switch to maintaining stuff, I reach for my headphones.
Yeah I believe that could be true for the wider population, but the empirical evidence from my personal study into this shows that I’m completely incapable of any kind of extended concentration without entraining my neurons with repetitive beats. n = 1.
I find there's certain types of music I can and can't listen to if I want to be productive. If it's music I really like, and I'm humming along, or singing along to the lyrics in my head, I'm not gonna be able to focus on the work.
If it's more ambient music (like those "lo-fi beats to study to" livestreams that are always running on YouTube) I can usually let that run and be a replacement to the white noise coming from the fan in my room. But I still keep the volume pretty low.
For me there are gradients. Some tasks need a fair bit of concentration so I can't listen to music with vocals or a lot of dynamics. Some tasks need total concentration and I can't even listen to music at all.
From Brian Hook[0], who worked with John Carmack at id Software:
> I remember Carmack talking about productivity measurement. While working he would play a CD, and if he was not being productive, he'd pause the CD player. This meant any time someone came into his office to ask him a question or he checked email he'd pause the CD player. He'd then measure his output for the day by how many times he played the CD (or something like that -- maybe it was how far he got down into his CD stack). I distinctly remember him saying "So if I get up to go to the bathroom, I pause the player".
> You know what's pretty hardcore? Thinking that going to the bathroom is essentially the same as fucking off.
Was nobody else surprised that this article about Java development focuses so much on CPU architectures?
I realize it's mostly talking about testing infrastructure rather than Java code but it feels sad that we end up here - I remember Java's top selling point being "write once run anywhere" and I genuinely believed the JVM would shield you from most issues with CPU architectures. But it seems like they managed to sneak back in through the back door.
The JVM has worked fine on the M1 from almost day 1. I have workloads running on AWS Graviton without issue. I'm sure there was work getting the JVMs up and running on ARM initially, but as a user who builds on top of the JVM, they have worked great for me.
After skimming the article, it looks like it's mostly another long complaint about Docker which has been discussed many times at this point.
Yes, that's kind of what I meant by sneaking in through the backdoor. An article supposedly about Java development focuses on all the non-JVM dependencies. You don't need to build things in this way. Is this how everyone works these days? It surprised me that nobody else was commenting on this (edit: the comment from pjmlp is basically saying what I was thinking)
Before, people thought Java would save everyone from ever having to think about native dependencies. It turned out not so much, although it works "OK" for desktop these days.
Now everyone thinks Docker is going to save everyone from having to think about native dependencies. In a few years everyone will realize that Docker only works like that when deploying from Linux to Linux on the same CPU architecture.
It seems that quite a few dependencies, e.g. for networking, databases, etc., use natively built code internally via JNI. If new targets like Mac arm64 are not added, those dependencies don't load. I remember, for example, the Google Protobuf JVM package, which uses the C++ Protobuf library internally. It took quite some time before people inside Google had M1 machines and were building the JVM library for M1 too.
So those CPU architectures do indeed sneak back in through a back door... It's the same on Android; there you also have many libraries which need to be built and published for specific architectures. But luckily new architectures are not added regularly.
My issue is that our company uses GitLab for CI builds, and GitLab doesn't have ARM runners. And I'm the only guy with a MacBook, so using some Mac Mini as a GitLab CI runner is not possible. I'm rebuilding the images I'm currently working on for myself, but that's tedious and not a very productive use of time.
Another alternative I'm currently considering is to rent a VPS in my city and use it as a Docker host. I'll be dependent on the internet, so that's not very nice, but it might be an option to consider.
I wish Apple would extend Rosetta to VM support. That's really the missing piece of the puzzle when it comes to migrating to ARM. qemu is not good enough.
I thought building an ARM64 Docker image with qemu on a Ryzen 9590X would be a good way to offload building Docker images from my Raspberry Pi 4. My benchmark for building an ARM64 Nginx image was the following:
It's so weird to see that Java was architected to bend over backwards to be able to run on multiple CPU architectures, which was a feature nobody actually used for decades (well, mobile devs did, but they were still targeting a homogeneous arch), and now, when there's actually demand for this from the backend side, it doesn't work due to other components in the system.
I worked on an enterprise software system written in Java, our customers ran it on Solaris, Linux, AIX, and some even ran it on Windows, and developers on OS X. Same binaries, worked everywhere.
If you have an AWS presence (Oracle Cloud has them too), they have ARM VMs you can get. You could probably self-host a build agent on one of those to do your ARM builds.
We are running gitlab-runner on an M1 Mac Mini now for iOS builds. It runs fine; it was a little complicated getting React Native/Fastlane to compile the app, but we eventually got it running and it's creating new builds almost every day. And we are using the Scaleway M1 machine, so we can easily do remote management.
I really wish CI companies would step up - they are the missing link now. I run ARM locally and my servers run it too, but I have to work around missing ARM CI step.
At least CircleCI has ARM machine runners, but not Docker ones.
Gitlab has ARM binaries for Gitlab runner. I can’t speak for the shared runners you get access to on Gitlab.com but you could always run your own runner and connect it to your Gitlab.
I'm a bit surprised nobody mentions the `--platform` argument that docker accepts to emulate a different architecture per container. It's a very smooth experience if you're reliant on 3rd party images.
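For reference, it's a per-container setting, so you can mix emulated and native containers; roughly (the image is just an example):
    docker run --platform linux/amd64 mysql:5.7
Compose has the same thing as a `platform:` key on a service. It's QEMU underneath, so heavy workloads take a hit, but for the odd third-party image with no arm64 build it's usually fine.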
I worked with cross-compiling containers. The compile times were... bad. Our x86 build took like, two minutes (this was a very small and lean C++ application). The arm32v7 ones took upwards of 30 minutes.
Works the other way around as well: Compile platform independent code (such as Java) on --platform=$BUILDPLATFORM in a build stage and then copy into containers that are --platform=$TARGETPLATFORM. That way your build only runs once, natively, but you can produce the correct runtime containers for each architecture rather quickly.
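Roughly like this (stage names and tags are illustrative; BUILDPLATFORM/TARGETPLATFORM are the automatic build args you get with BuildKit/buildx):
    FROM --platform=$BUILDPLATFORM eclipse-temurin:17-jdk AS build
    WORKDIR /src
    COPY . .
    RUN ./mvnw -q -DskipTests package
    FROM eclipse-temurin:17-jre
    COPY --from=build /src/target/app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]
When you build that for several platforms at once, the first stage is pinned to the build host's architecture so it runs (and caches) once, while the final stage is assembled per target architecture.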
Judging from the comments here and the article, it reads to me that perhaps staying on Intel Macs, waiting for the developer ecosystem to catch up and skipping the early M1 models (November 2020) was a very smart decision to make rather than spending months fighting with your tools.
Unless you want to wait 6 months or even a year to do any work reliably without these 'issues'. And by then an even faster machine would be worth buying anyway.
Agree, waiting has paid off for late M1 adopters. The software ecosystem has smoothed out most of the hiccups.
And now there is the Mac Studio: M1 Ultra, 128GB RAM, 64 cores. An absolute beast of a machine. Fits in a backpack, so it's just as portable as a laptop assuming you work from docking stations anyway.
I have an M1 Max, with only 64GB and only 10 cores (the 64 cores in the Ultra are GPU cores, not CPU cores). For a software developer, it is probably fast enough, and I can use it anywhere. I build Rust, and build times aren't a problem. Certainly better than my i9 Intel MBP.
I love my MBA M1. It's the best computer I ever owned. Super fast for the kind of stuff I need it for. The battery seems to be always charged and it is immediately available when opening the lid.
And it doesn't even have a fan.
Best Mac ever.
I just bought an MBA M1 with 16 GB of RAM. I had to check to make sure that it was actually going into sleep mode because the system seems to instantly turn on and unlock when coming out of sleep mode.
It's also cold most of the time; it takes ages to get up to what feels like "room temperature". I'm smitten.
As a Clojure developer, I'm very grateful for this information. Thank you! My development environment is OpenJDK plus a bunch of Docker containers (I use Docker extensively to contain/freeze various dumpster fires like projects that use npm and thus restore sanity to long-term development).
Docker's promise was never about processor architecture. In most cases, I would say developers have been working with Docker locally on a different processor architecture than what they deploy to in production. Their local machine might not have AVX-512 instructions, but the production machine might... or any number of the small variations of AMD64 that exist. If you're not careful, you can end up compiling binaries that work on your machine, but don't work in production, all on AMD64.
These days, Docker supports multi-arch images, so it's fairly trivial to build one image that supports both AMD64 and ARM64 transparently. CI tools like CircleCI support runners for AMD64 and ARM64, so you can even run your test suite on both architectures for additional confidence, if needed.
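Concretely it's one command these days (registry and tag are placeholders):
    docker buildx build --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myapp:latest --push .
You get a single tag backed by a manifest list, so each machine pulls the right architecture automatically.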
For me, Docker containers have always been about reproducibility of builds (which some people will argue about, but it does a good job 99% of the time) and consistency in deployment. You don't need to have an artisanal deployment methodology for each application... you just need to have a way to deploy docker containers. Even for single static binaries like Go projects often produce, wrapping them in a docker container just makes it easier to abstract away the deployment problems across projects. For more difficult to deploy languages like Python, you get similar benefits to having a single static binary by wrapping all the dependencies up into a neat little container.
Plus, once you have a standard unit of deployment like a docker container, you gain access to the broader ecosystem of container tools with minimal effort, such as running each container within a Firecracker microVM if you need isolation.
I've been enjoying my 2020 M1 as my local Node, Spring Boot, and Jupyter playground. It's a wonderful device. I've noticed there's a lot of momentum to supply M1-related fixes; for example, I think a Kafka admin tool I was using just shipped an update for it, so things are moving along.
I think around next year I might consider asking for an M1 upgrade for my aging MBP.
I have had an M1 MacBook Pro for over a year, and it did take extra work building SBCL Common Lisp from scratch, setting up brew for the M1 architecture, and experimenting with what would work for me on Docker.
M1 Macs are awesome, I love mine, but I understand devs who want to stick with Intel.
One of the reasons M1 is easy for me is that I do a lot of dev using mosh/ssh, tmux, Emacs on Intel VPSs.
Too funny, this. (I still consider Java, and Javascript along with it, a new kid on the block. The language is still evolving at a rapid pace - unlike C, for example - with the kinks still being worked out.)
[1] https://www.jetbrains.com/remote-development/gateway/