Tell HN: Travis CI is seemingly compromised (once again)
160 points by spondyl on Dec 8, 2022 | 53 comments
A number of Travis CI users appear to have had their tokens revoked by GitHub in response to suspicious activity surrounding those tokens.

Travis themselves have still not issued any notice or acknowledged this incident, so it's worth letting the community know in case they weren't already aware.

From memory, this would be the second breach in 2022 (https://blog.aquasec.com/travis-ci-security), in addition to last year's secret exposure (https://arstechnica.com/information-technology/2021/09/travi...)

---

A sampling of users on Twitter who have run into this issue:

https://twitter.com/peter_szilagyi/status/160059327410805555...

https://twitter.com/yaqwsx_cz/status/1600599797118996491

https://twitter.com/samonchain/status/1600611567606775808

https://twitter.com/dzarda_cz/status/1600613369408634886

---

An example notice being sent out by Github (in lieu of Travis themselves taking any action):

> Hi {username}

> We're writing to let you know that we observed suspicious activity that suggests a threat actor used a Personal Access Token (PAT) associated with your account to access private repository metadata.

> Out of an abundance of caution, we reset your account password and revoked all of your Personal Access Tokens (classic), OAuth App tokens, and GitHub App tokens to protect your account, {username}.




Not surprising after they got rid of so many devs. It reminds me of another company that did that recently; I wonder how long we'll have to wait for that one to start having issues.


I’m sure any day now. Better hold your breath!


Uh, even in the worst case it's gonna take months before it becomes an issue.

Firing everyone who knows the platform works pretty well in the short term; otherwise the outsourcing industry wouldn't be so popular.

It's just highly unlikely to be reversible at this point, so once issues pop up it's basically goodbye.


I think it's fair to say it's been surprising to many of us that it hasn't had any major public issues yet. I speculate that changing it (adding features, upgrading deps) is now slow and difficult, as all hands are probably busy just keeping it functional.


I think it's a good signal for microservices architecture - at least at this scale. When things have broken, it's only been one piece of the platform for me, rather than the site breaking entirely.

Things I've seen broken over the past week:

* Notification counts - these are replaced by an empty blue dot

* Media - text-only tweets and replies work fine, but no profile pictures

* Replies - root tweets load fine but show an error if you try to open replies. If you try again enough times they usually load.


I had enough "Something went wrong, try reloading" errors in a row two days ago that I gave up trying to see replies.


It doesn’t seem surprising to me that a big corporation is (was?) overstaffed.


No one, not even a super-rich "genius", can come into a company having done no due diligence and, within two weeks, correctly and rapidly identify the more than half of the staff who are in fact redundant and not critical to operations. Add to that the cultural problems causing mass resignations, and even if you are 100% correct that it was overstaffed, it's clear this was not the correct way to handle it; even the smallest modicum of common sense would show that.


Everyone knows you need to farm that out to a consulting firm who does it based on numbers in a spreadsheet instead.


Elon did; does that upset you? You say "no one can", but he did. The platform is not on fire, and it hasn't been down for any relevant amount of time. Even if it were, it's Twitter, not critical infrastructure. Some people won't be able to post hot takes and snarky retorts. Oh my.


Keeping steady state is not impressive. But we’ll see when another log4j happens, and everyone’s phone numbers and other personal information is leaked.


Didn't that already happen to Twitter? https://youtu.be/p3ZuYttJkL4


The other company got rid of freeloaders, so it could afford devs, and it's already better off for it.


I hope it has someone wealthy backing it, or access to software engineers capable of untangling the truly astonishing complexity of the product. I wouldn’t want to see a company fail because top Silicon Valley talent would never compromise ethically for high compensation if they needed to bring on developers in a pinch.


I'm surprised it's still around. Since GitHub released Actions and Travis abandoned the freemium model, there haven't been many reasons to stay.


I can't believe it took me so long to formally flip the bozo bit on Travis CI by instructing my RSS feed reader never to show me anything about it any more.


arm64 is one reason.


Set up a self-hosted GitHub runner on one of those free Oracle Cloud ARM64 instances.


Sure, that is doable, but it doesn't quite sound like an out-of-the-box solution where you just add a flag to your YAML, or another line to your build matrix, and call it a day.

I've honestly missed this on GitHub Actions, and haven't (yet) found a simple enough workaround to bother.


But it's also a set-and-forget kind of thing.


If it’s that easy… git a link to a quick howto? :)

Does it work with regular (free) accounts, free organizations, or other similar setups used by non-commercial open-source projects?


https://docs.github.com/en/actions/hosting-your-own-runners/...

You can find a guide on getting an ARM64 instance in Oracle Cloud elsewhere.
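
Once the runner is registered, pointing a workflow at it is a one-line change. A minimal sketch, assuming the default labels GitHub applies to self-hosted runners (the workflow name and steps are just illustrative):

    name: arm64-build
    on: push

    jobs:
      build:
        # self-hosted, linux and ARM64 are the labels the runner gets
        # automatically when registered on a Linux ARM64 machine
        runs-on: [self-hosted, linux, ARM64]
        steps:
          - uses: actions/checkout@v3
          - run: uname -m   # should print aarch64 on the Oracle Cloud instance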


Travis the company had an exit a few years ago, and the general feeling since then is that the product isn't really maintained anymore. If your needs are simple, GitHub Actions works well; if you need features like insane parallelization, use Circle.


GitHub Actions supports heavy parallelization and fan-in/fan-out jobs just like CircleCI does, so I'm curious whether there are some limitations I haven't run into yet. I'd go further and say that its documentation is much richer and easier to search.
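
For reference, a minimal sketch of what fan-out/fan-in looks like there (job names and matrix values are purely illustrative):

    jobs:
      test:
        runs-on: ubuntu-latest
        strategy:
          matrix:
            node: [16, 18, 20]   # fan-out: one parallel job per entry
        steps:
          - uses: actions/checkout@v3
          - run: echo "testing on Node ${{ matrix.node }}"

      release:
        needs: test              # fan-in: runs only after every matrix job passes
        runs-on: ubuntu-latest
        steps:
          - run: echo "all matrix jobs passed"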


The last time I used something other than Woodpecker for my projects / projects I was involved in (so maybe a year ago?), GitHub Actions was a bit harder to debug, as you don't get SSH access like you do with CircleCI. Being able to just SSH into the builder/worker when something goes wrong cuts down debugging time so much.


I've been using act[1] as a tool to develop/test actions locally, which has helped a lot for creating new actions and debugging existing ones without incurring additional build costs.

There do appear to be some solutions on the marketplace that will allow you to SSH into a runner, but I haven't had a need for them as of yet.

[1] https://github.com/nektos/act


If folks do need to SSH into a runner to get over some humps, come and talk to me. No point in waiting to get to that 5 or 25 minute mark in a job just to change one line and have to wait again.

https://www.youtube.com/watch?v=l9VuQZ4a5pc

https://twitter.com/alexellisuk/status/1577977820055310336


If folks need to SSH into a runner to debug stuff, don't talk to me or anyone else; just choose a CI that allows you to do that without any extras or third-party stuff :) CircleCI is great for that, but Woodpecker CI also ships a "run locally" command with its CLI, and as a huge plus, Woodpecker is 100% open source, compared to most other options.


The hosted Actions Runners are super convenient. I use them a lot for OSS.

For private repos and work stuff, we are working on actuated, which comes with unmetered billing minutes because you BYO machines or cloud instances.

ARM builds turn out to be ~20-30x faster than using QEMU, too.

https://docs.actuated.dev/

It was up on Hacker News recently, so you may have already seen it.

https://blog.alexellis.io/blazing-fast-ci-with-microvms/


I couldn't find the parallelization limits easily with GitHub. Also GA lets you build many things, one of which is CI/CD - the docs, last time I looked, suffer as a result of being less focused on CI/CD. You've obviously had a different experience. FWIW I'm currently using Actions for Portal and it's been fine, but my needs are smaller on this project compared to previously.


The GitHub audit log is unusable when trying to figure out what the "suspicious activity" is. For the repo category, only the actions that change something are logged. At least for the Enterprise plan, I would like the audit log to work more like AWS CloudTrail: just log all the API calls.


And maybe highlight some? Github's internal systems already triggered on something, so why not (at least generally, to preserve method) indicate that to a user?


Yup! Corp comms have failed here: not issuing any additional statement and not communicating with the (paying) customers about the (potentially damaging for customers) actions taken. This just erodes the corporate image and customers' trust.


Devil's advocate: depending on the level of access an attacker has, that info could be used to more carefully hide surreptitious actions.


This would be plain security by obscurity, which is the worst kind of security.


I hate that trend in modern services.

They just decide something's wrong with your account but don't tell the user what, or why it was decided.


Their fancy schmancy machine learning technique probably can't articulate it.


In theory it's to reduce information for bad actors, but we know in practice it never really works this way.

Don't send people on a goose chase because you're obscuring details for "security."


I remember using Travis for everything (although on their free tier).

After they did organizational changes, and some other stuff, I noticed my builds were hardly running. Many never started, and even more never completed.

Travis was causing more issues than it solved for my projects' Github PRs.

I'm honestly surprised to see people still using Travis now, a few years down the line.


Bug bounty people (myself included, though mine's quite aged) have written scrapers for all the main popular CI/CD platforms to automagically scrape tokens from logs and submit bug reports to get paid. It's unsurprising if malicious actors have done the same.


I might get a ton of hate here, but just use Jenkins [if you cannot use modern tools like GHA]. Put it on port 8085, for example; go to your cloud provider's security rules and block all requests to port 8085 that aren't coming through a VPN. Sleep like a baby at night. The end.


People still use Travis CI? GitHub Actions, Circle, AWS Pipelines... there are so many better options.


The same was said about CircleCI yesterday.


Interesting that all of those affected are somehow related to cryptocurrency.

edit: ah no, only some


Apply the Willie Sutton principle (https://en.wikipedia.org/wiki/Willie_Sutton)?

"Because that's where the money is" frequently works, even if the quote is apocryphal.


As a long-time Jenkins user who never hopped on the Travis/Circle bandwagon, is there any reason to consider these in 2023?


I used to be a Jenkins user. Now I'm all in on GitLab, and it's got some cool CI things.

Migrating was tedious, not difficult. We went slow; it took a while and needed little scaffolding.


Thanks!

I played with a self-hosted instance a bit (we need to store all code on premises) and it looked like several interesting CI/CD options were marked as "only in Ultimate". Is it worth it?


I think so. I'm not using Ultimate though, and the team is <10.

I mean, it "just works" and has all these critical pieces tightly integrated. I'm hyped on it.


I saw this happen in a couple of orgs I’m in that don’t use Travis, at least to my knowledge


If a team member uses a personal token with Travis and the personal token can access private org repos, there’s a chance this can trigger.


Absolutely. I haven't seen anything yet that was a smoking gun pointing at Travis. While it's a likely candidate, I wanted to voice that it might be another vector.


The whole travis-ci.org vs travis-ci.com thing was too confusing



