> “The breach was massive, customer data was at risk, access to customers’ devices deployed in corporations and homes around the world was at risk.”
> “They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration,”
Maybe putting your network control plane in 'the cloud' isn't such a good idea after all...
Edit: Just re-read the article, this part stood out:
> the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.
> Adam says Ubiquiti’s security team picked up signals in late December 2020 that someone with administrative access had set up several Linux virtual machines that weren’t accounted for.
If this is true, and whoever breached them had full access to their AWS account, can we really trust them to clean up all their tokens and fully eradicate all forms of persistence the hackers may have gotten?
Was shopping for alternatives to my Ubiquiti last night. Seems like there is nothing good out there. Engenius has shit hardware and a cloud controller. Aruba has a cloud controller AND you have to pay for a license. Cisco makes you pay for a license. TP-Link is cloud-based.
WTF. Does anyone have a decent WAP where I can use PoE, deploy like 5 of them and have them support roaming between APs, all managed locally? Is that too much to ask?
Disclaimer: worked for Meraki (now Cisco Meraki) for several years.
Generally, halfway decent wireless APs are all targeted at the enterprise market. Consumer hardware is a brutal race to the bottom, as lay consumers aren't qualified to compare options based on anything but price and UI. Ubiquiti was an outlier in trying to bring enterprise features to the consumer market.
The problem for enthusiasts and small business/home office setups like yours is that both the enterprise market (e.g. Meraki) and the premium consumer market (e.g. Google WiFi) focus heavily on ease of management - cloud controllers are table stakes these days, not a controversial feature. Part of the premium that Meraki, Aruba, and that class of enterprise supplier charge is about having a trustworthy and secured backend.
Note, however, that roaming between APs is a feature of the 802.11 standard; you just need to have all your APs on the same layer 2 (802.x) network, and using the same SSID and credentials. No fancy hardware required, and you can even mix and match vendors.
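As a concrete sketch, on an OpenWrt-based AP the relevant stanza might look like this (the SSID, key, and interface names are placeholders; any vendor's equivalent settings work the same way):

```
# /etc/config/wireless on each AP - same SSID and key everywhere,
# with every AP bridged into the same layer 2 network
config wifi-iface 'default_radio0'
        option device 'radio0'
        option network 'lan'            # bridge into the shared L2 segment
        option mode 'ap'
        option ssid 'MyNetwork'         # identical on every AP
        option encryption 'psk2'
        option key 'SharedPassphrase'
```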
My personal experience with Meraki has been the very definition of vendor lock-in.
The security appliance was relatively cheap; then we saw the fine print that the total bandwidth was artificially limited and only increased adequately two product levels up. Sorry Mr BubbleTime, you need to buy a new appliance and a new license. Your old one is worth nothing and non-transferable, watch it rot.
The switches seem absurdly expensive when you consider the 5-7 year licensing costs. And the quality is poor at best considering Meraki went and pushed a firmware update that bricked every fan in every 48 port switch we had. But you have the security appliance so it “only makes sense” to pay for these switches.
We had an IPSEC incompatibility between a vendor with an ASA and our Meraki gear. The solution was to buy a Cisco device just for that one connection.
All in all, it’s passable, but because of the lock-in it’s not like I have a cost-effective choice to get away from it. I wouldn’t choose it again.
That said, it does offer a mediocre IT tech a single pane of glass that they’d have to actively try to mess up.
Of all the Meraki factors I’ve learned and considered, the fact that it is cloud-based is the least important toward my recommendation or lack thereof. There are lots of people who would be happy to explain all the ways my experience is wrong, but whatever.
Completely agree with the lock-in, and they aren't the best or most featureful devices out there. It seems the sweet spot for them is places with LARGE distributed footprints (such as retailers), where you can have very simple networking (some traffic back to HQ, the rest to the internet).
It fits well with being able to rapidly bring bodies into a project and implement change X across hundreds of stores, while having a standing IT team of 5.
If you have onsite (fulltime) IT, it's likely not the best option.
Is there a community for this kind of discussion at this point? When I was an admin, and then later working in networking in the 2000s, there were tons of very active mailing lists, not just for hardcore networking but for IT-oriented stuff; they've mostly faded to a shadow of their former selves.
I'd be particularly interested in comparisons of Meraki/Mist/etc. for small enterprise and campus.
Some of the relevant subreddits have decent discussions from time to time. The grandfather is /r/networking, but if you look at its sidebar, there's a long list of other subreddits for more specific subjects and individual brands. Stick to the subs for professionals rather than minor home network issues and you'll find quite a few knowledgeable people and plenty of anecdotes both good and bad about different brands etc.
"Cloud-based" is the implementation; the killer feature is the single pane of glass. It's just hard to implement that without putting a bunch of logic in the cloud.
Last I worked at Meraki was 2015; I don't remember any artificial limiting of bandwidth at that time.
"Cloud-based" is the implementation; the killer feature is the single pane of glass. It's just hard to implement that without putting a bunch of logic in the cloud.
Hard in what way? As long as the control traffic has paths between all relevant devices over the management LAN, why does the cloud need to be used at all?
1. Putting the management UI on a local system requires some custom networking setup, and is full of security footguns.
2. Most customers who want this have multi-site setups; in that case, you need paths across the public internet too. Again security footguns, and also reliability ones.
3. Remote work is very very common for IT people.
4. Recovery from configuration mess-ups is harder if your control plane has to run on the same network that you've messed up.
There are on-site controllers available. They've just lost out in the market because of the amount of in-house IT expertise they require. No one wants to deal with that shit, and outsourcing the security and reliability problems to a specialized third party is usually a good idea.
This looks like an enterprise perspective. For smaller organisations operating on a single site, some of these concerns won't apply. I also think you're being a little one-sided there because cloud-hosted configuration has its own risks in terms of security and accidentally cutting off your management access, many of them directly analogous to the ones you mentioned, plus you have all the usual concerns about any critical system that depends on Internet connectivity to work properly. At the end of the day, nothing is more reliable than local wired networking, and nothing is more flexible for disaster recovery than having someone physically on-site.
In the prosumer to small business segment, I would argue that there is still enormous potential value in being able to configure all of the network gear from a single GUI, not least because it doesn't then require a lot of in-house networking expertise to get something going that works and is reasonably secure.
> also think you're being a little one-sided there because cloud-hosted configuration has its own risks in terms of security and accidentally cutting off your management access, many of them directly analogous to the ones you mentioned,
But with a cloud-managed system you have a professional, single-purpose organization dealing with those challenges. Which you are getting for the rock-bottom price of your licensing/support plan. Building a good internal IT organization is hard and expensive, and most businesses have other things to do.
> plus you have all the usual concerns about any critical system that depends on Internet connectivity to work properly.
Generally these systems only need internet connectivity to change the configuration and for some monitoring features. In practice, customers are okay with these being unavailable during internet outages as long as both the management platform and the ISP are on a pretty strict SLA.
(Compare, for example, the usual downtime from your 1-4-person IT team not having someone with the right skills on call.)
> and nothing is more flexible for disaster recovery than having someone physically on-site.
Who has the cash for that?
> In the prosumer to small business segment, I would argue that there is still enormous potential value in being able to configure all of the network gear from a single GUI, not least because it doesn't then require a lot of in-house networking expertise to get something going that works and is reasonably secure.
That was my original point: "Generally, halfway decent wireless APs are all targeted at the enterprise market. Consumer hardware is a brutal race to the bottom, as lay consumers aren't qualified to compare options based on anything but price and UI. Ubiquiti was an outlier in trying to bring enterprise features to the consumer market"
I don't know what your standard for a 10-to-50-employee small business is, but "point your browser at this IP address" is usually beyond their in-house technical skills [1]. Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market. No one [2] cares.
[1] See for example the rise of the Managed Service Provider, which was a large and growing subsegment for Meraki back in 2015 or so. Showing up, installing the hardware, setting up the wireless, and then managing it from your office a few miles away is a big business opportunity, and is a much more efficient use of limited skilled IT labor.
[2] No one with substantial resources and a profit motive.
OK, with tongue firmly in cheek, I will try to reply to your points from the perspective of the small organisations I was talking about.
> But with a cloud-managed system you have a professional, single-purpose organization dealing with those challenges.
Just to be clear, are you thinking of the professional, single-purpose organization we've been discussing today in the context of a catastrophic data breach, the one we've been discussing in the context of incompatibilities with other vendors, lock-in effects and expensive licensing, or a different one?
> Generally these systems only need internet connectivity to change the configuration and for some monitoring features
So as long as the equipment is set up exactly how we need it and never needs to change or be checked for any reason, everything is good. It's hard to imagine why these devices need a UI at all, when the engineer who installs the equipment could just set it up once and then you're done.
> In practice, customers are okay with these being unavailable during internet outages as long as both the management platform and the ISP are on a pretty strict SLA.
John: Bob, the Internet is out again. Who do I call at the ISP?
Bob: We don't have a dedicated contact, it's just the business support number on their website.
John: I'm in the queue, at number 17. What's our maximum time for someone from the ISP to contact us about an outage? That might be faster.
Bob: No-one will call, but if it's not back by next business day we do get £50 off next month's bill.
(This is roughly how that conversation probably goes when you're a 20-person organisation with two floors of an office building on a business park outside a small town.)
> (Compare, for example, the usual downtime from your 1-4-person IT team not having someone with the right skills on call.)
What's an IT team?
> Who has the cash for that?
What cash? When we have a new starter, John or Bob sets up the WiFi on their laptop and company phone and adds those MAC addresses to the whitelist for the network. Normally John works in development and Bob works in sales, but they do know a bit about networks so this is fine. Well, as long as they can get to the GUI, anyway.
> Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market. No one [2] cares.
And yet as someone who has worked for software development businesses for an entire career and whose customers/clients have mostly been other relatively small organisations of one type or another, I have never met one that didn't. Of course that could be because I've tended to work with other technically-inclined businesses, but the same is true even for schools or my own business's accountants. I'm not claiming this is some sort of universal truth, but I don't think the market is nearly as tiny as you're suggesting, at least not in this part of the world (the UK).
Remember, we're probably not talking about setting up encrypted WAN tunnels across continents and multiple layers of switches in a data centre here. We're more likely to be talking about getting an Internet connection with suitable firewall set up, connecting a handful of switches and APs and making sure everyone knows the WiFi password, and installing everyday software on the staff PCs and mobile devices with maybe some basic configuration and enabling updates.
[1] See for example the rise of the Managed Service Provider, which was a large and growing subsegment for Meraki back in 2015 or so. Showing up, installing the hardware, setting up the wireless, and then managing it from your office a few miles away is a big business opportunity, and is a much more efficient use of limited skilled IT labor.
They're not unheard-of here, but again, in my experience such arrangements are far less common in smaller organisations than just having a couple of people on the staff who also "set up the IT" and know enough for the kinds of everyday admin tasks you're talking about.
> What cash? When we have a new starter, John or Bob sets up the WiFi on their laptop and company phone and adds those MAC addresses to the whitelist for the network. Normally John works in development and Bob works in sales, but they do know a bit about networks so this is fine. Well, as long as they can get to the GUI, anyway.
> "Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market."
You have that expertise in house. Having looked at sales numbers and market research for a company that sold internationally and cross-industry: yes, your experience is very unrepresentative.
> even for schools...
Tangent: schools are honestly pretty technically sophisticated! We sold to some of them at Meraki, but they were drawn to us more for labor savings than to compensate for limited expertise. Education customers typically had very few (especially in perpetually-underfunded US primary and secondary schools), but very competent, IT people. They were feature-hungry power users.
In part that's because, even with low employee headcount, they have to provide a surprising level of IT services per student as well. A school with 80 employees and 1000 students probably has the IT workload of a white-collar employer with 500+ headcount.
> You have that expertise in house. Having looked at sales numbers and market research for a company that sold internationally and cross-industry: yes, your experience is very unrepresentative.
OK, let's assume that's true for the sake of discussion. According to your market research and sales numbers, what is the big market for these cloud-managed products among smaller organisations, and how do those organisations generally manage their IT facilities?
1. Use low-cost consumer hardware with zero centralized management, and set it up with the same expertise and judgment as your typical residential deployment.
2. Have one admin person with the wherewithal to work with web UIs, who wants a simple setup-and-forget system. UI not much more complicated than a single-AP residential deployment, user management workflow no more complicated than adding a G-Suite user. If they can use the default password for the admin system, they will (which e.g. Meraki and Aruba don't have in any meaningful sense).
OK, so let's look at the second of those, since the first is consumer level and not really our target market for professional grade networking equipment.
Your original contention was that it's hard to implement a single pane UI without putting a bunch of logic in the cloud. If our hypothetical one admin person with some idea of what they're doing, together with any automatic assistance the relevant devices provide, can set up enough local networking that all of those devices can reliably access the Internet and support cloud-based configuration, then a similar process can set up those devices to support single pane configuration using the LAN only.
At that point, looking back to the four "hard problems" you enumerated a few comments ago, I still don't see a strong argument for needing the cloud dependency.
The risks around network setup and reliability don't seem any worse for LAN-based configuration than cloud-based. In fact, LAN-based clearly has an advantage by not relying on any external infrastructure. It also has the advantage that if you want to get more serious for a larger deployment, you can run independent cabling and create a dedicated management network for control signalling, while most places aren't going to have an independent second Internet connection for management traffic if you accidentally break your configuration so your main data network loses Internet access.
Managing multiple sites is probably a non-issue at this level of the market.
Remote access for IT/support people is easily provided if necessary by having safe and easy VPN setup as part of your user-friendly interface. This has the added advantage that your tech people can also reach any other parts of the network they need, and so you might have required this functionality anyway. And if it's locally configured, you can always quickly shut that VPN access off again in case of any security worries, without needing anyone else's remote systems to be working properly before you can secure your own in an emergency.
In actual deployments and support situations I saw at Meraki, connectivity from individual hosts to the internet was usually the most reliable part of the network.
At this point, it feels like the reasons to use or not use Cisco for networking are much the same as the reasons to use or not use Oracle for databases. I'm not sure it has much to do with the technology in either case any more.
> Note, however, that roaming between APs is a feature of the 802.11 standard;
In theory yes, but man do a lot of devices have terrible roaming heuristics.
"I can still see beacons so I'd better stay here even though I haven't received a packet in the last minute. Wouldn't want to pay the time cost of associating with that other BSS that has 5X the signal"
The key issue is the protocol seems to have no ability to associate with multiple BSS's together.
It's so nearly there. The power management stuff means that even with a single physical radio, one can associate with multiple BSS's on different frequencies by telling one BSS to hold packets for you while tuning in to the other frequency.
All that's needed to make it reality is a way to tell a BSS "If I fail to ACK a link layer packet, please forward it via the wired network to this other BSS to send to me instead".
Then a client could be connected to multiple BSS's, send packets via either, receive packets via whichever one it is currently tuned into, and not lose any packets while switching.
You can fix this on the AP side with minimum RSSI or data rate control. But that would probably push you over to either Ubiquiti (and the similar “cloud based” options) or the enterprise market to get those features, unfortunately.
Have you tried setting your transmit power low (just enough to get good signal to the places intended, but definitely no more than your devices can transmit) and increasing the minimum send rate to something reasonable (say 10-40 Mbps, beacons use minimum rate)?
It should help high power bad signal (some devices use fixed thresholds) and equalize the beacon vs. data reception quality.
I don't think OpenWrt has data rate config in the web UI, but it does support the setting in the config files (which I normally scp onto a device). The following seems to work:
/etc/config/wireless:

config wifi-device 'radio0'
        ...
        option txpower '1'        # 1 mW - more than enough for one room
        option legacy_rates '0'   # disable 802.11b legacy rates
        list basic_rate '24000 36000 48000 54000'
        list supported_rates '24000 36000 48000 54000'
This messes with your AP placement though: depending on where the APs sit, you may or may not end up with dead spots, so make sure your coverage is sufficient before taking this strategy. And yes, I take this strategy too.
I went through this when setting up wlan in a new office some years ago, looked at roaming APs etc.. finally I just bought 4 consumer Asus routers on the same SSID, worked fine for all our purposes at least.
Do people _really_ need wifi roaming in their homes?
I have multiple cheap APs set up in my house using the same SSID and it's fine. As long as I'm not holding a realtime conversation while moving between APs, I never have any problems - and since I almost never hold a Skype call while walking through my house, issues are rare.
You don't have stone walls. And you haven't spent the last year working in a study that's located between two APs, where clients flip now and again and Zoom would tear down the connection.
Of course you could say: Does the house have to be designed that way? Do the APs have to be located where they are, is it really necessary to have that stone wall, is it necessary to put the study in the place where it is, is it necessary to have that noise insulation around the elevator? None of that is necessary, but some Mikrotik hardware was much cheaper than getting rid of a stone wall and more pleasant than having to hear it when the neighbours use the elevators.
Yeah I have a 2' thick stone wall in the centre of my house (old exterior wall). I have an AP on either side of it as they penetrate the ceilings/floors above fine, but nothing is getting through that wall and maintaining good signal.
Just brick that's old enough will do it. Mine's something like 150 years old, and it's absolute murder to drill into, just incredibly hard, and it's either dense enough to act like stone, or it's absorbed enough moisture over the years to look like a faraday cage to wifi.
Yep, stupid L-shaped house where the inner curve is a damn Faraday Cage. NOTHING goes through.
If I'm in the living room and need to move to the other end of the house to get away from family-related noise, the device needs to roam between two APs.
Even if you don't "need" roaming having more coverage lets you dial down the power on all of your APs, so you can get much closer to the theoretical maximum throughput.
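To put rough numbers on that trade-off: under an idealized free-space model (no walls or interference, so real indoor ranges are far smaller), dropping transmit power by 10 dB shrinks the cell radius by a factor of about 3.16, which is why denser low-power APs give each client a stronger, less contended link:

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    # Free-space path loss in dB (idealized: no walls, no interference)
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def max_range_m(tx_dbm: float, sensitivity_dbm: float, freq_mhz: float) -> float:
    # Largest distance at which received power still meets the client's sensitivity
    budget_db = tx_dbm - sensitivity_dbm
    return 10 ** ((budget_db + 27.55 - 20 * math.log10(freq_mhz)) / 20)

# A -10 dB power drop shrinks the free-space cell radius by 10^(10/20) ~= 3.16x
r_high = max_range_m(20, -70, 2400)  # 20 dBm AP, -70 dBm client sensitivity
r_low = max_range_m(10, -70, 2400)   # same client, 10 dBm AP
print(round(r_high / r_low, 2))      # -> 3.16
```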
4 floors, 150-year-old brick, random steel girders in annoying places, and a broadband line that comes into the building at almost the least convenient place possible. Yeah, I need roaming.
> Note, however, that roaming between APs is a feature of the 802.11 standard; you just need to have all your APs on the same layer 2 (802.x) network, and using the same SSID and credentials. No fancy hardware required, and you can even mix and match vendors.
Not exactly. There are extensions to pre-authenticate with an AP (802.11r) for truly seamless roaming without packet drop or delay and for AP controlled roaming (802.11k) where the current AP tells you your options to roam to. This last one is important because the AP has generally better information about the network than the client and because the clients are not that great at managing this.
I am sure there are other extensions too, but afaik cheap APs don't implement these.
The base standard's behavior requires a reassociation to the new cell (i.e. AP, i.e. BSSID). This introduces a gap in coverage, but for simple setups like the 5-AP one IgorPortola is talking about - I assumed that this was using shared-password auth - the gap's length is functionally 0. 802.11r gets rid of that gap, which is important when using heavier-weight authentication protocols like 802.1x.
(Note that by 802.x in my original I meant not 802.1x, but rather the set of standards including 802.3 (ethernet) and 802.11 (wifi))
Ubiquiti had a secured backend - their screw-up was not doing MFA on their admin accounts. I would still like it if there were an option for a local-only control panel.
If admin login is using weak credentials, it is by definition not a secure backend. Password/credential management and mandatory MFA are ALWAYS part of security due diligence for suppliers.
There are ways to limit the scope of those: one set of credentials per environment, for example. You can also limit the use of these credentials by policy.
The cloud controller is a (surprisingly heavyweight) service that manages a network of unifi devices. It can run on a raspberry pi, or an x86 container / vm.
If I wanted to run it all the time, I’d try putting it in a docker container on my synology.
Instead, I have an sd card for my raspberry pi that has nothing but the controller installed. The main downsides to this are that it is easy to lose the sd card, and that the controller gathers bandwidth/usage/wifi connection reliability stats, but only when it is running. I don’t get those unless I boot up the RPi to diagnose some network issue (this has never been an issue in practice).
One advantage of the RPi setup over a Synology container is that it has both an ethernet jack and a wifi adapter. This is surprisingly helpful when bootstrapping complicated mesh topologies.
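For the Synology/Docker route, a compose file along these lines is a starting point (the image name, ports, and paths here are assumptions based on the community-maintained linuxserver image; check its documentation before relying on them):

```
# docker-compose.yml - hypothetical sketch, verify against the image's docs
services:
  unifi-controller:
    image: linuxserver/unifi-controller
    container_name: unifi
    environment:
      - PUID=1000
      - PGID=1000
    ports:
      - "8443:8443"      # controller web UI
      - "8080:8080"      # device-to-controller communication
      - "3478:3478/udp"  # STUN
    volumes:
      - ./unifi-config:/config   # persists config across container rebuilds
    restart: unless-stopped
```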
I have a UDM Pro which self-hosts a controller, though personally, if I'd known it couldn't be joined to another controller, I'd have gotten something else so I could throw it in Docker (which runs on a NUC with storage off a Synology).
Gosh. I wish I knew. This thread is rife with alternatives, so anyone's guess is as good as mine. The UniFi APs I have running are still good and work extremely well. So my suggestion is to keep using them, but only if you host the controller software on your own hardware (I'm using an RPi 4 as stated) and only if you avoid their cloud solution(s). (This IMO.)
I am still looking for alternatives when the time comes to replace mine. Which I'll be forced to replace once/if they completely nerf the self hosted on self hardware options.
The Ubiquiti controller is not needed for general operation, unless you're using a guest hotspot. Otherwise, if it's offline you just lose the ability to change configuration and its data/stats logging.
Hah, that's a dream world where enabling/disabling SSIDs ever worked properly.
They have a good UI, good hardware but the software seems half baked.
Originally with the switch to the "new settings", the schedules were switched between the APs and the UDM; not sure about a dedicated cloud controller.
Still lots of pitfalls with just MFA. Text/email is the worst, and TOTP is somewhat better but not great. A lot of password vaults support storing the TOTP secret so they can generate time-based codes, which seems reasonable when the vault is 2-3 factor protected (some do IP heuristics, passwords, tokens, PINs, etc). Unfortunately, if someone gets access to the vault in its unencrypted state, you're in for a world of hurt.
Even with hardware tokens, if someone gets access to your machine while you're using it, they can wait til you authenticate, then use the creds proxying requests through your machine so they look legit.
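Those vault-stored TOTP secrets are dangerous precisely because code generation is deterministic: anyone holding the secret can mint valid codes forever. A minimal standard-library sketch of RFC 6238 shows why:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: TOTP is just HOTP with a counter derived from wall-clock time,
    # so possession of the stored secret is all an attacker needs
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)

# RFC 6238 test vector: at t=59s, 8 digits, SHA-1, the expected code is 94287082
print(totp(b"12345678901234567890", for_time=59, digits=8))  # -> 94287082
```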
I run a local controller with no remote access for unifi - i would never use any networking hardware that needed a cloud controller/connection for breaches exactly like this.
Wow this is great and seems like a direct competitor to UniFi. Few years back when I was researching meraki I found it way too pricey for small business over UniFi but this makes much more sense now.
With standard 802.11 roaming, you have to reassociate and reauthenticate to the new AP. While this process is underway, you can't pass any traffic. For open networks or simple auth schemes like WPA2 single-password, this isn't very noticeable; however, for heavier-weight auth schemes like 802.1x this pause is substantial and is especially noticeable on voice/video calls. 802.11r is a scheme for caching the authentication info, letting you avoid the 802.1x round-trip to a central auth server.
For a 5-AP network, usually with shared-password WPA2, it's not necessary.
Yes, roaming by sharing SSID and passcode is a world of pain. 802.11r solves all those pains, I've been using it on OpenWRT for months without a glitch.
Yes, it's why I use 802.11r. It works with most devices, although the one which does not support it makes me laugh. Nintendo Switch will not switch from one AP to another. It holds on, tooth and nail, to whichever BSSID it used when it first connected.
My kids have to go into settings, reconnect, and move on.
I have a couple of AP AC Lites running openwrt and 802.11r, works fine except on Xiaomi phones apparently...
I never tried the UniFi software though; I flashed OpenWrt within 15 minutes of receiving the APs.
Pretty much that. It's also very simple nowadays: you just tick the box on the Wireless Security tab, and check that the mobility domain matches between all the APs - it should by default, I think it's derived from the SSID.
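For reference, on OpenWrt that checkbox corresponds to a few UCI options (the values shown are illustrative; the mobility domain is any 16-bit hex value that just has to match across all APs):

```
# 802.11r fast transition options in /etc/config/wireless
config wifi-iface 'default_radio0'
        ...
        option ieee80211r '1'              # enable fast BSS transition
        option mobility_domain '4f57'      # same value on every AP
        option ft_psk_generate_local '1'   # derive FT keys locally on PSK networks
```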
Be aware that there might be compatibility issues. I enabled it on a pair of OpenWRT-running APs, and the handoff worked fine for my laptop, but my phone would claim to be successfully associated/authenticated with the new AP, but traffic wouldn't flow. Turning off 802.11r fixed the issue completely, and it turns out I don't really need it after all, as my devices seem to roam properly and the reauth is pretty quick.
We use Meraki MR/MX stuff at our office and are generally happy with the value & service. The MS stuff though, that's another story. Do you guys have plans to enter the sub-$2K tier with L3 devices?
I haven't worked at Meraki since 2015; sorry, can't help you out on that one.
I will note that as of 2015, "L3 switching" (i.e. hardware-accelerated IP routing) hardware was expensive as hell. I believe that on the software side, dropping new hardware into the existing hardware-routing infrastructure is fairly easy, but I don't actually know because I didn't work much on MS hardware.
So the question then becomes: is there just not a good enthusiast market for this stuff? I have met a number of people who are "network nerds", so I'm inclined to think the market does exist. With any of the plethora of consumer devices (Linksys, Netgear, D-Link) it's a dice roll whether your gear is complete garbage or not. A lot of the time, you're coming up snake eyes.
I've got some Ubiquiti gear I bought a couple years ago. Like you, I want good quality gear that I can manage myself. I don't need a bunch of fancy corporate garbage, like link aggregation or cloud management. Give me solid, hardware accelerated routing and switching, flexibility over my local DNS, and maybe some VLANing.
I was running Linux on a small x86 box as my last network router. Maybe it's time to get back to that. That or go back to banging rocks together. Haven't decided which, yet.
> So the question then becomes: is there just not a good enthusiast market for this stuff? I have met a number of people who are "network nerds", so I'm inclined to think the market does exist.
my experience as a professional "network nerd" is that most other people in the networking field run cheap/second hand enterprise gear fetched from their employer at a major discount and simply seem to care less about wifi in general.
A lot of that changed with my peer group either due to caring about managing from a phone or caring about power/noise. The latter are especially not things real enterprise gear tends to optimize for.
The wireless is something for guests, and is hacked together with something you know works with an open router OS, or something off-the-shelf on an isolated VLAN.
That kinda thing yeah, at least myself and other engineers I’ve compared notes with.
I picked up a pair of Aruba 3200 controllers and a bucket full of APs on a local auction site for a song years back, still does me fine. Then again, not caring about the fastest latest standards is key, if you’re chasing current gen the enterprise stuff is unaffordable. You do need the appetite for a bigger power bill, mind.
I can't imagine that there isn't a market for this. Look at the number of people recommending Ubiquiti stuff to each other. There are entire YouTube channels dedicated to it. If your whole living space or small office can be covered with a single access point, get a 3-in-1 combo that has a WAP, a router, and a small switch. But if you don't, you are left with, what exactly? There is also some demand for mesh stuff, for people who rent and don't want to run Ethernet cable.
My plan: OPNsense on a PC Engines board for router + firewall, an unmanaged PoE-providing switch for switching, and something from 2-8 WAPs for indoor/outdoor Wi-Fi.
There were/are some performance implications of pfSense/OPNsense on these boards specifically. It seems this has improved significantly in FreeBSD 12+.
> APU2, APU3 and APU4 motherboards have four 1GHz CPU cores; pfSense by default uses only 1 core per connection. This limitation still exists; however, single-core performance has considerably improved.
I can saturate 1Gb/s with no problem out of the box on Debian/OpenWRT on APU2/3/4; YMMV.
I had a PC Engines board for a while and I really liked it, but make sure the one you order can support your internet bandwidth. When I upgraded to 1 gig internet, I was only pulling around 450Mbps on my PC Engines apu1d4. I ended up getting a Ubiquiti UniFi Security Gateway, and then I was able to pull the full 1 gig.
It's pretty hard to recommend Unifi based on how they handled this breach, but the hardware itself has performed very well. Hopefully the new PC Engines boards can accommodate your needs.
You can connect the Google mesh routers together with Ethernet. I’d guess other competing products will do the same. It’s cheaper and much simpler than a full Ubiquiti setup for a few access points.
It's got a quad-core i5. I run Proxmox and virtualize VyOS as a router, Home assistant, and a couple of other small things like an https reverse proxy for various services that I like to access remotely.
Went this route after my old OpenWRT router couldn't keep up with gigabit WAN. This box has no problems doing so, and even does WireGuard at near wire speed.
There are a bunch of similar units available on Aliexpress, as well as 1U units with x86 CPUs and SFP ports for 10GbE, etc.
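For a flavour of what the VyOS side of such a VM looks like, here's a rough sketch of a minimal NAT-router config. Interface names and addressing are assumptions, and this uses the 1.3-style `set` syntax, so adjust to your release:

```
set interfaces ethernet eth0 address 'dhcp'
set interfaces ethernet eth1 address '192.168.1.1/24'
set nat source rule 100 outbound-interface 'eth0'
set nat source rule 100 source address '192.168.1.0/24'
set nat source rule 100 translation address 'masquerade'
set service dns forwarding listen-address '192.168.1.1'
set service dns forwarding allow-from '192.168.1.0/24'
set service dhcp-server shared-network-name 'LAN' subnet 192.168.1.0/24 range 0 start '192.168.1.100'
set service dhcp-server shared-network-name 'LAN' subnet 192.168.1.0/24 range 0 stop '192.168.1.200'
set service dhcp-server shared-network-name 'LAN' subnet 192.168.1.0/24 default-router '192.168.1.1'
```

eth0 is the WAN side (DHCP from the modem), eth1 the LAN side with NAT and a DHCP/DNS service for clients.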
They’re small passively cooled embedded x86 machines. They haven’t made the jump to 10GBit, and their newest model (the apu2) is getting pretty old. However, they have very long production timeframes (many years) for each board config, which leads to stability over time.
As you said, it's an embedded solution, and its CPU power is borderline for GigE speeds if you want more than the bare minimum (firewall/NAT), like QoS, DPI, or some virtualized services.
I have an ER4 which works for now, but I plan to go down the custom route once the ER4 is unable to push packets quickly enough. My hope is that VyOS/DANOS is sufficiently stable by then to run as a VM on, say, an Odroid H2+ replacement (or something similar).
Does this type of setup support a mesh network with multiple APs and SSIDs, VLANs, etc? I have never seen a PC based all-in-one interface that supports all of these things the way Unifi does...
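Multiple SSIDs per radio is doable on a plain Linux AP via hostapd's multi-BSS support; each BSS can be dropped into its own bridge (and from there its own VLAN). A rough sketch, with the interface names, bridge names, and passphrases all placeholders:

```
# /etc/hostapd/hostapd.conf (sketch)
interface=wlan0
driver=nl80211
hw_mode=g
channel=6
ssid=main
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=main-passphrase
# main SSID lands in the LAN bridge
bridge=br-lan

# second BSS on the same radio
bss=wlan0_1
ssid=guest
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=guest-passphrase
# guest SSID lands in an isolated bridge, which you can tie to a VLAN
bridge=br-guest
```

What hostapd doesn't give you is the single-pane management across APs; you end up templating configs yourself.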
> So the question then becomes: is there just not a good enthusiast market for this stuff?
No. They just don't want to serve the low end. I'm from SK, Canada and the vast majority of all businesses are small businesses. This site [1] says 98%. The problem is they only account for about 25% of the GDP, so vendors don't consider them worth serving. Everyone wants to sell to the 2% of the businesses that make up 75% of the GDP.
There's a lot of money to be made in the small business sector. It's just not *enough* money for huge tech companies.
I've thought for a while that the neglect of consumer, prosumer, and small business computing is a side effect of concentration of wealth. A small percentage of businesses have all the money.
I do casual work for a person that serves that sector. It’s 100% self serve for us. We’ll pay fair value for stuff and vendors won’t ever need to interact with us. The problem is when those vendors think their firmware updater is worth a $10 / month subscription. It’s not.
For example with pfSense going closed source we’d be willing to pay around $100 total lifetime cost to put it on PCEngines hardware. We can build that in to the upfront cost of the device. I wouldn’t be shocked if they try for $50-$100 / year which won’t be economically viable for our market, so instead of getting $100 / device and never interacting with us, we’ll end up moving to a different product. I really hope they come up with an offering that’s appealing to the small business sector, but I’m not holding my breath and I’ll be learning opnsense as a contingency.
As a former enthusiast in this area, I need the time for other more pressing interests and have reverted my home network to Eeros pinned to an IQrouter. All of them require some central service to operate, and I rarely if ever have to pay any attention to them. They also provide better coverage and less radio interference than the prior gold standard, Apple Airport devices. The IQrouter runs some sort of *nix variant reachable over SSH, and the only time I've ever had to call Eero support was to turn off 5GHz for a minute^ to pair a smarthome device.
Still, it’s nice to have a hobby, and if you’re looking for one, run your own, sure! No shame in that. But it’s no longer necessary, and that’s pretty swell to me.
^ I agree with why they don’t make that accessible to end users: because people will uselessly fiddle with settings knobs to feel empowered, knobs like “separate 2.4 and 5 networks” (which breaks roaming and makes users incorrectly blame their WiFi routers when PEBCAK is at fault) that semi-expert users feel qualified to mess with, and lazy technicians will use to create “guest” networks that don’t offer protection and perform miserably due to being locked to 5GHz.
Maybe you and I have different opinions of "enthusiast" in this context. There is really only so much you're going to do on a home network. You set it up and once it's going, it requires very little maintenance. I would not consider running my own network gear a "hobby" any more than I would consider restaining my deck a "hobby". It's largely a one-time project.
I do have requirements beyond what the typical consumer does of their network, like PoE to run a couple of access points, PPPoE so that I can put my modem in bridge mode, the desire to configure extra DNS records, dynamic DNS since my home IP changes. Oh, and let's not forget some filtering/rewriting capabilities so that I can force modern smart TVs to respect the DNS server I provide them.
My network is much more usable having put the time into it. Yes, you could buy some off the shelf thing and get an OK experience, but that wasn't good enough for me.
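The smart-TV trick is usually done by redirecting all outbound port-53 traffic to the local resolver at the router. A hedged nftables sketch; the LAN interface name and resolver address are assumptions:

```shell
# Redirect any LAN client's DNS queries to the router's own resolver.
# "br-lan" and 192.168.1.1 are placeholders for your LAN interface/resolver.
nft add table ip nat
nft add chain ip nat prerouting '{ type nat hook prerouting priority -100 ; }'
nft add rule ip nat prerouting iifname "br-lan" udp dport 53 dnat to 192.168.1.1
nft add rule ip nat prerouting iifname "br-lan" tcp dport 53 dnat to 192.168.1.1
```

Note this only catches plain DNS; devices that switch to DoH (port 443) slip past it unless you also block known DoH endpoints.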
I used to do all of those things on homebuilt FreeBSD routers for a commercial ISP we built and ran for a few years back in the day, and now I do them on my off-the-shelf router so that I don’t have to maintain the OS or link-shaping, I just click Update Now once in a while and it autoadapts to local congestion.
All of these features are available out of the box and have a GUI intelligent enough to offer a text area for adding filtering/rewriting commands that exceed the GUI’s remit. I used to have to hand-build this. Now I can plug and play it, and end up with the same experience as someone who built their own server and OS, using the same open source components as they would.
Total time invested, 8 hours over 5 years. I’m content with that exchange, and it has come with the only drawback being “it cost money to purchase the router itself”. I could DIY for less expensive in dollars and more expensive in hours. That’s the hobby-or-not choice, as I see it.
I do not decry those who invest time instead. Good, do so! I invested thousands of hours of my life into DIY of this stuff. It was invaluable experience, but it’s no longer mandatory to DIY to get a great experience indistinguishable from DIY.
I'm guessing that they're just not interested in making infrastructure products anymore, only the client devices. Airport is discontinued, all backend/server devices are discontinued.
They do sell mesh wifi products from Eero, Linksys and Netgear on their shop, but I don't think there's going to be any Apple-branded network gear anytime soon.
Check the Openwrt table of hardware[0] for a well supported device, and you're good to go. Seriously, there is no good vendor software in this space, but the consumer hardware can actually work fine with better firmware.
Generic Linux or BSD boxes are ok as routers, but they're not the best switches since they start taking up a lot of space if you need a bunch of NICs.
OpenWRT. Been using that in my home net for the past 12 years or so, on multiple generations of various hardware.
The latest incarnation on linksys ea8500 is slightly bumpy (seems like a kernel crash), but didn’t get annoying enough yet to hook up the serial console and get into kernel bug hunting, yet.
I have about a dozen VLANs that are distributed between different SSIDs and a few L2 switches for wired, plus bonjour gateway/filtering for stuff like AirPrint.
I've seen someone have a fair bit of success with Grandstream APs. The controller runs on an AP itself, or on their router if memory serves me right. I believe they are also moving into the switch market later this year.
Me too, but not really an alternative - the original tomato isn’t even updated any more, and it’s only configurable in its web ui, so it’s really only for home use.
Garbage was a bit of an indulgent word. It certainly is relevant and useful technology. It just isn't useful for home users, at least none that I've ever met.
It is as useful at home as it is anywhere else. Failures just cost less at home.
All my switches are bonded to one another, and it was handy when something snapped one of the fiber runs. That side of the house kept connectivity until the weekend when I could crawl around and run a new cable. (Never did figure out why it broke, though. Guessing the house shifted in just the right way.)
It would have hardly been the end of the world if I had to wait, but if your kit can do it, why would you not?
I mean, sure. If you have the capability and the inclination, go for it. I live in a house that is quite large and I can't come close to fully populating a 24 port switch in a useful way.
I would not detract from your network going the extra mile. I suspect that for most people, the value-to-effort ratio of link aggregation just isn't there in a residential setting.
Look into Mikrotik hardware and OpenWRT. Of the Mikrotik-based hardware I'm familiar with, they support PoE. OpenWRT supports roaming and mesh networks, and is a local solution, as opposed to a cloud-based one. There are no licenses you need to pay for, either.
Mikrotik is amazing for what you get. Bit of a learning curve, but worth the effort; I've seen large-scale wireless networks crossing mountains with their kit.
I set up a small WISP using Mikrotik kit for a few neighbours. It worked well in the end, but the learning curve was immense unless you have a strong networking background. I'd set up and used OpenWRT before for a domestic router, and this was another level of complexity to get basically functional compared to that. That said, the level of customizability and scripting (albeit in a weird language) is immense, so for a true power user with a lot of time on their hands, it's a good option.
IMO using what we have intelligently is easier. Ubiquiti's Edge line of routers and switches is not cloud-controlled, doesn't listen on any ports, and doesn't establish any connections on your behalf.
The only routers vulnerable to that exploit were routers that were deliberately configured to be open to the internet, no router with the shipped default config was vulnerable. The vulnerability was patched out in a bugfix release months before the exploit happened, so additionally it was un-updated routers at risk.
That's something entirely different from what happened with Ubiquiti.
True, I bought it because of the 10Gb Ethernet and youtubers recommending it. I didn't realize it was also a router with a $45 license key.
https://mikrotik.com/software
many people switch not simply for the security/security-theatre, but because they no longer want to support a company with such poor security strategy after it is revealed that they have internal issues.
They all do though. And if they don't, they're all at risk to. The best you can do is make decisions that reduce dependence on them for when they fuck up. That's why I went with the edge router line to begin with. I've already planned for this situation.
like actual cisco-brand ones, or cisco compatible ones?
i checked my order history, it looks like ipolex and 10gtk 1000bT copper modules have had troubles in my mikrotik switches. the mikrotik brand works fine. and every 10G fiber module i've tried has worked (lots of fs.com, and i think 10gtek, and probably some other brand off amazon)
No, TP-Link's Omada controller can be run locally, I do that at home and at my parents' house. It is not cloud-connected unless you turn that on. Runs surprisingly well on a Raspberry Pi 2, actually.
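If you'd rather not maintain the controller on bare metal, it also runs fine in a container. A sketch using the widely used community image; the image name, data paths, and the host-networking choice are assumptions to verify against its docs:

```shell
# Run the Omada controller in Docker with persistent data volumes.
# Host networking keeps the controller's broadcast-based AP discovery working.
docker run -d --name omada \
  --network host \
  -v omada-data:/opt/tplink/EAPController/data \
  -v omada-logs:/opt/tplink/EAPController/logs \
  mbentley/omada-controller:latest
```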
I've got a setup similar to what you're asking for. The TP-Link APs (AC1750, AC1350 and AC1200) support PoE, they're in a wireless mesh, support roaming, and all configuration is handled with one interface, no cloud involved.
Just make sure that what you're ordering says it supports Omada. They still ship a lot of SMB gear that doesn't, but all the basics are there now.
Only been using it for a few months but it's been good. I moved the config I mentioned above (the three APs) to my parents' house and they haven't had any problems. Throughput in their case is a little limited but that's expected with the installation (no ethernet and a lotta walls). Hasn't needed a reboot or anything.
I just started using an EAP660 HD[1] at home a week ago, so far so good. Haven't topped out the speeds yet because nothing in my house can take advantage, but I have some AX200 cards coming. I understand there's a throughput bug at the moment that's going to be solved in a future firmware fix[0], but my clients don't go fast enough to hit that yet. TP-Link seems to very actively update their firmware for the pieces I've been using, FWIW.
So I've been pretty happy with it so far. Roaming has been fine, though in one case I think I had non-optimally located a couple of APs because my Linux laptop kept rapid-fire flapping between two of them. I believe that's a client-side problem, though.
I did try a Cisco 240AC and its wifi performance was rock solid. The management interface is non-cloud, and I believe covers the whole network, but it lives inside the AP itself, which I don't love. The management UI is buggy and they seem slow to push bugfixes, and when I added a 142ACM to extend my network it started going flaky -- I had to do a factory reset/reconfigure of the 240AC to resolve it, then it happened again a few weeks later -- so I'm gonna flip my Cisco stuff on eBay. :-(
[1] Tip if you adopt one of these in Omada: You need to give Omada the EAP660's password (default "admin"/"admin") for it to successfully adopt. The other APs never required a password to adopt, so it was a little confusing until the internet came to the rescue.
I bought 3 EAP330s and TP-Link deprecated them after a year or so. No more firmware upgrades for their (then) top "enterprise" access points. Rumour says they weren't happy with the chipset, so decided to abandon them altogether (just this model, cheaper ones were on different chipsets and support was available for longer). Last time I checked there was no OpenWRT support of any kind. They did hang when I had port aggregation enabled and seemed to run rather hot. But feature-wise and non-trunked-networking-wise they were fine, supported what I was looking for, no cloud, I didn't even use the controller, you can just manage them "the old school" way. But don't count on years of support.
For what it's worth, we've been running about 15 TP-Link EAP225s in a warehouse without any hiccups so far. Most importantly, they don't randomly die or lose the controller pairing like some low-end Ubiquiti units did in the past. The only quirk is that on Windows Server you have to configure the service manually, but it's no big deal. [0]
I also have a TP-Link Omada setup. For layer2 networking with switches and AP's it's fine. Cost effective, reasonably stable, acceptable performance and features that are regularly used are all there.
The layer-3 stuff however is still early days, and I can't recommend getting the secure gateway at this time. No IPv6 support. It depends strictly on an internet uplink configuration for the default route, to which all traffic is then NATted. Can't change that. No real security features, no packet inspection, etc. The routing features really feel like an alpha version. They are working on it and have a roadmap to a more workable layer-3 solution, so maybe in the future it will be as nice as the Ubiquiti solution.
Cloud is not needed but possible. You can get an OC-200 controller for not much money that fills the role of single pane configuration webinterface. The software for that controller can also be downloaded for Linux on PC or ARM if you want to use your own hardware. Also the network keeps running if the controller is down.
If you login to the OC200, it's under settings > cloud access. It should be off by default. Or you can login to the cloud interface and forget the OC200 under actions.
I run a similar setup with a bunch of EAP-225 APs controlled by a local instance of their Omada software (running on x64 rather that on ARM).
I've been very happy with roaming/throughput/reliability generally. The EAP-225 is 2x2, which they don't readily announce. Their newer and more expensive units are available as 4x4. That being said they're so cheap, I've been happy just to throw more onto the network.
For the software to manage them it uses some kind of multicast identification scheme to find new APs. If you're on a different subnet then it won't be able to automatically see them. They have a tool to connect to the AP and give it the management server IP, but that's Windows only.
The other option (that I went for) is just to create a management VLAN (good practice anyway) that the controller and APs live on. This is specifically supported by the APs.
Great without it. The major improvement I noticed with it is 802.11k & v (faster handoff).
Without those, it takes a little longer for the device to switch APs at the borders of their coverage. It's mostly imperceptible, but the longer handoff time can be enough to kill a phone call over iPhone WiFi calling.
As a US citizen, I would love for there to be a reasonably-priced US-made alternative. I guess Netgear could be one[0], but their Insight management system is cloud-only, isn't it? Happy to be corrected.
I think I'd rather take an ostensibly-offline controller from China than a cloud-enabled one from the US, though I'm not really happy with those options. :-(
Are there some good options I missed? Would like to hear about them, if there are any.
[0] I expect their hardware is made in China, even if their controller may not be.
It's a sad commentary on how far the bar has been lowered. "No, your system isn't secure, but the people that can access it can't really do you bodily harm" is not really the level I would hope we are trying to achieve.
I'm not sure what you're calling conspiracy theories since it looks like the GP edited his content, but if you think China is not exfiltrating data from hardware, let me know. I'll provide you with copious references from the recent past. Sure, the US is doing it, too.
I certainly think they do for businesses, but worrying about state actors attacking your home network is kind of pretentious until they actually do it. Are you that special?
The comment was something about how if you get the FBI mad they'll fabricate a drug case against you which somehow involves hacking into your home router or possibly subpoenaing your ISP.
If the favorite color of hat for you happens to be black, then sure, why wouldn't state actors be looking for you? If you've done some stuff that involved using credit cards that didn't belong to you, or any other of a myriad of things on the FBI's list of things you should not do, then they will be looking for you.
And the NSA was known to be intercepting router shipments to international customers, injecting their backdoors, then re-shipping the modified hardware:
I have a Turris Omnia for my main router. It's a solid piece of kit.
The OS, TurrisOS, is based on OpenWRT and for a while they were having trouble keeping up-to-date but that's been sorted in recent releases.
There are great features like auto-updates and BTRFS snapshots and the ability to rollback to previous known good if you screw up a config. I also run LXC containers on it for things like PiHole (not on the internal flash but the main board takes an M.2 SSD).
The Turris MOX is a modular Turris system that you can assemble from the parts that you need.
I have a small Gl.iNet router upstairs flashed with upstream OpenWRT that I use as a WiFi access point and have setup 802.11r for BSSID roaming. Have been using this setup for months and handoff has been completely transparent.
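For anyone wanting to replicate this, 802.11r on OpenWRT is a few options on each AP's wifi-iface. A sketch; the SSID, key, and mobility domain are placeholders, and every AP in the roaming set must share the same SSID, key, and `mobility_domain`:

```
# /etc/config/wireless fragment (sketch)
config wifi-iface 'default_radio0'
        option device 'radio0'
        option network 'lan'
        option mode 'ap'
        option ssid 'home'
        option encryption 'psk2'
        option key 'change-me'
        option ieee80211r '1'
        option mobility_domain '4f57'
        option ft_over_ds '0'
        option ft_psk_generate_local '1'
```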
These guys burned me so hard. Something on my Omnia burned out. I offered to pay to have it shipped and fixed and shipped back. They stopped emailing me back. It was a horrible, horrible support experience.
It's a shame that Mikrotik doesn't have an easy-to-use global GUI.
It's the right hardware, with great firmware and wonderful flexibility, but it needs an easy-to-use GUI controller to make the simple stuff easy enough to take over from Ubiquiti.
These recent posts about Ubiquiti have made me look again at MikroTik. Their hardware is more affordable than I had remembered. Is there any good intro to their hardware - there are certainly a lot more options than you get with Ubiquiti.
Even before now there are some limitations with UniFi that have annoyed me. Setting up more complex DNS and firewall rules requires editing the JSON config. IPv6 tunnelling isn’t well supported. The stats in the controller, whilst neat, aren’t very useful because they have to be manually reset to zero.
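For context on the JSON editing: advanced USG settings go in a `config.gateway.json` placed in the controller's site directory, mirroring the EdgeOS config tree. A hedged fragment; the rule number and address are made up for illustration:

```json
{
  "firewall": {
    "name": {
      "WAN_IN": {
        "rule": {
          "2000": {
            "action": "drop",
            "description": "example: drop a noisy subnet",
            "source": { "address": "203.0.113.0/24" }
          }
        }
      }
    }
  }
}
```

The controller merges this over its generated config on every provision, which is exactly the kind of thing that's easy to get subtly wrong and hard to see in the GUI.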
The benefit of the GUI is that it documents what has been changed: in the GUI there is a list of port forwards.
With the CLI you either need to document it yourself, or you need to know to query if there are any port forwards. That can be a problem if there is more than one person responsible for the network, or if someone else needs to inherit your setup.
Documentation of configuration sometimes isn’t an issue on your own home system because you generally have a high level memory of what changes you made and their purpose. Conversely I still struggle sometimes with Ubuntu because I customise my configuration using command line tools, and I find keeping track of those changes or the implications of those changes is difficult.
Yup, very nice router/switch. If anyone could forward a properly documented configuration to make the Apple AirPort guest network work I'd be ever grateful.
The best intro really is to buy some of their hardware and play around with it. Their routers and APs are all based on the same basic RouterBOARD hardware and run the same RouterOS. The specs for each device is pretty well laid out on their site, but you do have to read through a few product pages to find exactly what you're looking for.
I would start with a hAP ac², a wireless router that is approximately the equivalent of their hEX Ethernet router plus a dual-band AP (cAP/wAP ac). It's a great standalone device and less than $70, or you could get the individual devices for a bit more flexibility.
Avoid the models labeled "lite", those are low-cost versions with lower routing speeds and 2.4GHz WLAN only.
For management you can obviously configure each device separately, or you can use CAPsMAN where one device acts as the controller and handles all configuration. It's not as slick as Ubiquiti, but it works.
I use the edgerouter line for firewalls, and unifi (running on a local "cloud key", with cloud login turned off) for only access-points and some switches.
This news (covering up, legal overriding good security practices) is super concerning though, and I'm definitely going to start looking around as well.
Yea. I only have an edgerouter 4 as far as Ubiquiti equipment goes. It works great for its intended purpose (I needed a dual WAN router and consumer level gear generally doesn't do that). I was eyeing their WAPs, but I believe I'll pass on them now.
Global UI? You mean, AWS-hosted configurator for your network? We just had example of it being security risk. God save Mikrotik from implementing something similar.
That's basically what MikroTik CAPsMAN is, depending on your needs.
I think it's specific to Access Points, so not a general purpose centralized controller for MikroTik equipment, but... centralizing access point management seems to be the main thing under discussion here.
CAPsMAN is a royal PITA to set up. You have to manually add all the wifi channels, map each AP to the channels it'll use, and do a lot of busywork. Once it's set up, though, it works fine, and lets you upgrade all devices from the manager, etc.
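To give a flavour of that busywork, a minimal RouterOS CAPsMAN setup looks roughly like this. Names, band, addresses, and passphrase are placeholders, and exact parameters vary by RouterOS version:

```
# On the controller:
/caps-man manager set enabled=yes
/caps-man channel add name=ch-5ghz band=5ghz-a/n/ac
/caps-man security add name=sec-home authentication-types=wpa2-psk passphrase=change-me
/caps-man configuration add name=cfg-home ssid=home channel=ch-5ghz security=sec-home
/caps-man provisioning add action=create-dynamic-enabled master-configuration=cfg-home

# On each AP, hand the radios over to the manager:
/interface wireless cap set enabled=yes interfaces=wlan1,wlan2 caps-man-addresses=192.168.88.1
```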
nothing stopping you from using a local ubiquiti controller though. you aren't tied to their servers if you don't want to use them. that said, they seem pretty problematic from a security standpoint based on these leaks and your networking infra should be rock solid.
Winbox is a really nice remote controller for Mikrotik & vulnerabilities of a shared global controller have just been clearly demonstrated, so I don't see an issue.
Not really. The vulnerabilities of using a vendor-hosted cloud controller have been demonstrated, but having one yourself next to your networking devices is just as secure as it always was.
That's how I run it, but it seems they are now pushing ads to local controllers and between this and deprecating recently released devices, I just completely lost trust in them.
Small correction - if you don't have a product that would display stats in a portion of the "single pane of glass" control panel, it displays an ad instead of a "you don't have this product, no data to see here".
Scummy? Sure ... especially if you don't have a Ubiquiti gateway but only APs, so the top part of the page is blocked out. But it's not exactly "pushing ads at me!" in the traditional sense - e.g. they're not targeting ads, they're not collecting data.
Protect still needs cloud to be activated for authentication it seems.
I used to have remote access turned off and accessed the video streams via the iOS app when my phone was on VPN to the local network. That no longer works. Remote access (cloud) needs to be activated in order for the iOS app to work, no matter if you are on the local network or not.
i've run my own controller locally for years without forced cloud login.. i've never used the ios app, what can you do from it that you can't do from the web interface?
He said Protect, which is only part of the newer Gen2 cloudkeys (controller + video surveillance). The app just lets you manage the basic config of your devices and see network stats. There is a separate app for viewing your security cameras via Unifi cloud.
He said Protect, which only comes on the new cloud key gen2 devices and requires a Unifi cloud account. The old stand-alone controller (key or installer) does not unless you tie it to your Unifi cloud account.
I have a Unifi Dream Machine Pro with cloud access turned off-- the setting for it (since the UDM Pro makes all applications accessible via the cloud, not just Unifi Network) is in the device settings rather than the Unifi Network controller settings.
When they introduced call-homes/telemetry sometime in the 5.x code, I blocked their known DNS entries and then set up firewall rules to block all internet access outside of the Ubuntu repos.
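The rough shape of that egress lockdown, as iptables rules on the controller host. The repo address range here is an illustrative placeholder; resolve your actual mirror's addresses first:

```shell
# Allow loopback, established replies, DNS to the local resolver, and the
# distro mirror; drop every other outbound connection from the controller box.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp -d 192.168.1.1 --dport 53 -j ACCEPT
iptables -A OUTPUT -d 91.189.88.0/21 -j ACCEPT
iptables -A OUTPUT -j DROP
```

Rule order matters: the ESTABLISHED,RELATED rule has to come before the final DROP, or inbound-initiated sessions break too.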
As far as I know, TP-Link doesn't require any cloud based service, or even a local controller. They can work fine without any of it and you just manage them locally/directly.
I've never had good luck with TP-Link hardware though. Constant crashes/disconnections once you get past a few devices on the network, mysterious failures, hardware quickly getting dumped into the unsupported list, and so on. I've sworn off of them entirely.
Yep, this is what I do. I used the EAP245 and now the EAP 660 HD. Both were rock solid devices. Managed locally via a web browser. Plugs into a netgear switch, into a pfsense router.
You're conflating "the NSA secretly rerouting shipping company deliveries to end-users, installing their firmware, then sending it on" with "Cisco willingly did that".
Cisco was unaware, and once aware (thanks to Snowden), Cisco took steps to try to prevent it by altering shipping destinations at the last minute, en route.
We ban accounts that post like this. Please review https://news.ycombinator.com/newsguidelines.html and stick to the rules from now on. We've had to ask you not to post in the flamewar style to HN before, so this is a big deal.
So, while this whitepaper is news to me, how is this an "NSA backdoor"?
Reading up on this, it sounds like
* it was required, much as with phone tapping, by the US gov
* ergo, ISPs needed it, were mandated to have it
* therefore, Cisco implemented it
* this protocol was for lawful intercept. Police, FBI, everyone.
While beyond annoying, this is not a back door for the NSA. Nor is it even secret. Before you get all pissy, you should at least state fact as fact. Don't exaggerate. Don't make it about a specific actor when it isn't. And don't dress it up in whataboutism.
If your goal is to let people know, I assure you, spouting unvarnished, direct truth will help a lot more.
Nowhere is it said this was mandated.
That’s your assumption not supported by evidence.
So let’s run through it.
Cisco writes white paper supporting LE back door access.
LE/IC use hard coded back doors as revealed in the Snowden and Vault7 leaks.
You’re saying it never happened, ever.
Maybe you’re right (you’re not) but you spoke so firmly!
Do you know something I don’t?
In 2005 the FCC ruled that CALEA applies to broadband Internet providers
So yes, it was mandated. You may disagree with the ruling, but ISPs were required to do something, and Cisco enabled this on products for ISPs. Did they have it beforehand? Yes. However, this capability only existed on certain products, and other countries required this before the 2005 FCC ruling (again, from the IBM white paper).
But of course, this still isn't "Cisco put in back doors for the NSA". This is "Cisco putting in back doors for law enforcement, including even local police".
Further to that, everyone was aware of this. You can't have a 2010 white paper by IBM, before the Snowden leaks even (2013), if it was secret. And realistically, a "back door" isn't quite that if it is well known. It's just another access point in a product.
Secondly, the 'Snowden' leaks, which had everyone quite pissed (including Google, whom I hate, but...) and started the big push for SSL everywhere, were not caused by these specific back doors.
Heck, this white paper is from 2010, and this 'law enforcement' "back door" was well known, AND not in all Cisco products! How, then, could Google be surprised by this revelation, that this back door existed?
How could anyone?
It was not a secret. It was not in all products.
No, Cisco routers were infiltrated in two ways. Undisclosed vulnerabilities, which the NSA was aware of, and used against all router vendors to install NSA malware. And again, by intercepting shipments to end-users, installing NSA backdoors and malware, then resealing and shipping the product onward.
This is what the Guardian Snowden leaks talk about!
The big difference between China (and your whataboutism) and the US is that if you don't let the Chinese government into your company, do precisely what it says, and install all the backdoor software it wants, you don't have a company any more, and maybe not your freedom, or even your life.
Meanwhile, the NSA has been acting illegally, and does NOT have the support of US tech vendors. In fact, US tech vendors are hostile to the NSA's attempts to subvert their products, including lobbying US politicians to stop this sort of behaviour.
There is a vast difference between these two things, and in all of the above, Cisco did not willingly put "back doors" in anything for the NSA.
So in response to your question? Yes, I know something you don't.
History. Factual, actual, history. Not revisionist.
I'm happy to re-examine any of this, if you can provide links to data showing Cisco allowing NSA agents into its midst, and installing NSA spyware for its products at the factory. On purpose. Which aren't open, and were hidden from everyone.
Or something similar to this.
Because otherwise, your statement is absolutely, positively, not factual. How can I say otherwise?
And yes my original response was firm, because I've seen others say this sort of thing. We must be factual in our claims, not hyperbolic!
What IBM white paper?
Show me the law where this was mandated.
Because no, you are in fact misrepresenting the truth.
So, I agree with you about not being hyperbolic.
However, let’s just say I have exceedingly applicable industry experience. (IC and LE)
I know beyond a shadow of a doubt that I’m right.
So now my burden is finding what I can in the public domain to share this truth with you without violating NDAs.
Btw, with respect to your 'show me the law', 'mandated' doesn't mean 'legislated'.
That very same IBM whitepaper you cited, claims the FCC mandated it. As in, pushed an interpretation of a regulation. Are you claiming the whitepaper is wrong?
The whitepaper which you used to validate your claims?
Or, are only the parts of it which you agree with correct?
As far as the white paper, I mixed up Cisco and IBM in my head on that.
As far as “mandated”, laws and policy mandating back door access have been shot down repeatedly in the real world.
The claim of an FCC mandate in a white paper does not indicate legality of deployment in the real world is what I mean.
TP-Link's newer stuff wasn't supported by DD-WRT and wasn't going to be for a while there, so check first. They have a crypto blob for the radio binary (or the entire firmware image) that the project would have to trust blindly, without being able to adjust settings, or else violate the DMCA to reverse engineer it.
Don't know if this is still the case or not, but they did this for FCC compliance around the time 802.11ac was launching. That might have changed since, though; I'm not sure. I stopped considering them at that time.
Also a good company to look at would be Microtek, I have heard good things, but haven't looked into them directly.
Mikrotik, but unfortunately getting reasonable throughput for wireless clients is a serious challenge (I always have better results with openwrt on the same hardware). Still, nice to have local control and not have to rely on some cloud service just to use the hardware I bought.
Using 80Mhz channels I found the default configuration never exceeded 200Mbit/s using iperf. For me "reasonable" is closer to 800Mbit/s, which is roughly the theoretical limit for 80Mhz with 2 spatial streams. I run my tests with my devices sitting 1 meter from the AP. This is on a hAP AC, and like I said, I get much better performance (close to the theoretical max) running OpenWRT on the same unit. I have had similar issues with the RB4011 and cAP AC, and in both the NYC area and suburban Virginia (so it is not just an issue of spectrum crowding in the city).
Yeah, that sounds a bit slow. I suggest checking if fastpath and fasttrack are working.
I remember that when my hAP AC used firewall rules inside the LAN, it also did not go much faster. A good indication was CPU usage: if it sat at 100% CPU at ~200Mbit/s, then it was the firewall slowing things down.
> Does anyone have a decent WAP where I can use PoE
There are PoE devices with OpenWRT support[1], and it should be possible to enable 802.11r if they support it. They can be managed locally, even with a self-signed certificate.
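For illustration, enabling fast roaming on OpenWRT mostly comes down to a few hostapd options set identically on each AP. A sketch of the relevant /etc/config/wireless fragment (SSID, key, section name, and mobility domain are all placeholder values):

```
# /etc/config/wireless (fragment) - hypothetical values, repeat on every AP
config wifi-iface 'default_radio0'
        option device 'radio0'
        option mode 'ap'
        option network 'lan'
        option ssid 'HomeNet'               # same SSID on all APs
        option encryption 'psk2'
        option key 'changeme-passphrase'
        option ieee80211r '1'               # enable 802.11r fast transition
        option mobility_domain '4f57'       # same 2-byte hex value everywhere
        option ft_psk_generate_local '1'    # skip inter-AP key distribution
```

Depending on the release, 802.11r may require the full wpad package rather than the wpad-basic/wpad-mini variants.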
I use OpenWRT now and would really rather avoid it. I want a central controller, not every AP having its own UI. Plus firmware updates are always an adventure.
To somewhat eliminate the chances of adventure, I’ve profiled the setup for each of my many OpenWRT devices and created unique profiles for them in a (reasonably) simple Git repo[1].
All I need to do to get device-specific firmware is to update the OpenWRT version-number in a single makefile and the rest happens automatically.
I’ve even setup Github Actions to build the firmware for me (basically, run make), so I can even get/build new firmware from my phone.
I’ve yet to have any issues when flashing these builds. It used to be much worse when flashing the regular “official” OpenWRT image and restoring packages afterwards.
Couldn’t be simpler! (With the regular Linuxy you-have-to-build-it-yourself-first clause)
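For anyone curious what that looks like, here's a minimal sketch using the official OpenWRT Image Builder (the version, target, profile, and package list are placeholder assumptions; the real repo linked above is more elaborate):

```make
# Hypothetical Makefile: bump VERSION and run `make image` to get a fresh
# device-specific build from the OpenWRT Image Builder.
VERSION := 23.05.3
TARGET  := ath79/generic
PROFILE := tplink_archer-c7-v2
PKGS    := luci

IB := openwrt-imagebuilder-$(VERSION)-$(subst /,-,$(TARGET)).Linux-x86_64

image:
	curl -LO https://downloads.openwrt.org/releases/$(VERSION)/targets/$(TARGET)/$(IB).tar.xz
	tar xf $(IB).tar.xz
	$(MAKE) -C $(IB) image PROFILE=$(PROFILE) PACKAGES="$(PKGS)" FILES=$(CURDIR)/files/
```

The FILES directory is where per-device config (network, wireless, keys) gets baked into the image, which is what makes reflashing painless.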
About 5 years ago I would do the same thing. I want to set it up such that if I win the lotto and move away, the rest of my household can continue using the system without having to learn a CLI.
I don't know about you, but I "automate the old-fashioned way" at my day job, I want the damned thing to just work without me bothering with "SSH access and CLI tools" at home.
For those people here saying "go Ruckus unleashed" ... caveat emptor my friends !
I have it on very good authority that Ruckus have started rolling out a change in their pricing model to require an Unleashed license per AP to operate, a move which obviously increases costs to the end-user.
Some people might say it's a deliberate move to prevent cannibalisation of their main business model by nudging people away from Unleashed. I couldn't possibly comment.
My earlier comment was based on a change of policy which happened around 1st March, and any Unleashed quotes as of 1st March (and the two-weeks prior) need to be re-quoted for the new "license per AP" Unleashed model.
I've been a bit busy with other work since that bombshell dropped, but if I get a moment I'll try to dig up some pricing.
The other thing to note is the feature discrepancy between Unleashed and standard. Perhaps of most interest to your average HN contributor: the last time I checked, IPv6 was not supported on Unleashed firmware, with not much sense of urgency (if any !) to rectify that.
Thanks! I completely glossed over the IPv6 thing... At home I don't get native IPv6 from my ISP, so I just tend to forget about that. Although it would be neat.
For me, I bought my AP on eBay and just plopped the standalone Unleashed firmware on it, and that all seemed fine. From what I see, nothing is changing? But it sounds like you're running a /much/ larger install.
Actually (and ironically given the context of this thread !) the reason I found out about the policy change was because I was helping someone out who was looking to dump their Ubiquiti kit and realistically it looked like Ruckus was going to be the only sensible option (despite the already unpalatable price premium before the new policy).
As you may or may not be aware, Ruckus have an "all quoted" policy, there is no price list per-se.
At the time I was working on the project (late 2020) Ruckus did have a promotional activity going on where you could buy Unleashed kits at fixed prices without quoting.
However due to various technical questions that were coming up (e.g. IPv6 support) we missed the window and it was uncertain if Ruckus were going to extend the promotion.
Ruckus did extend the promotion, at least initially (Jan-Feb 21') but then they switched to the "license per AP for Unleashed" and the promotion was killed off.
It was at that point that my friend took the hint and dumped the idea of Ruckus, and I went back to my normal work.
If I get a chance I'll try to find out what happens about second-hand kit. My guess would be that if you stay on old firmware there's not much they can do about it. Although whether it's desirable or advisable to stay on old firmware is another question, obviously.
Without going into detail because, well, you never know who's reading ....
TL;DR "WatchDog End User Support" is now mandatory for Unleashed and is sold and priced on a per AP per year basis.
The pricing is not too scary (two-digit figure per AP per year). But I'm told the requirement is (will be ?) enforced, so it's unlikely to be a case of being sneaky and paying the first year and "forgetting" to pay the renewal.
I'm a big fan of flashing OpenWRT on supported APs. You lose central management and setup takes time, but I'm very happy with the stability and no worries about cloud services or vendor lock-in etc.
I bought an R610 AP on eBay a few months back, flashed it with the Ruckus firmware (legally available to all from their site), and it does exactly what you want. On-prem only, no cloud, one of the APs will act as a controller/manager for the others, and they can all communicate via wired or meshing off of each other. One of them can even be a NAT thing if you want.
I think I paid around $160 because someone had a bunch of off-lease ones. But if you look up anything that supports the Unleashed firmware you'll be good. 802.11ax is the hotness right now, so the slightly older (but still work great) ones are a LOT cheaper.
I replaced a Ubiquiti setup with a Ruckus R610 and small fanless running OPNsense (Protectli) with a basic switch and POE injector and it's excellent. Sure, it's not single pane of glass for it all, but the AP is rock solid and OPNsense is a solid known quantity. I've got no regrets.
Same here, I ditched my Ubiquiti and went with Ruckus and I could not be happier. I'm just so sorry that I ever bought into Ubiquiti's marketing when I purchased their AP. The Ruckus performs so much better and the mgmt software is light years better than Ubiquiti. I also run a Protectli but on OpenBSD (from pfsense originally).
Get Linux boards and USB-3 WiFi dongles with well-supported chipsets and roll your own?
The other alternative is to go way up-market and buy industrial gear. Consumer gear is shit due to a race to the bottom mentality. 90% of consumers buy the cheapest. This is also what turned every TV and appliance into a feature-encrusted shitbox full of spyware.
> Does anyone have a decent WAP where I can use PoE, deploy like 5 of them and have them support roaming between APs, all managed locally? Is that too much to ask?
Not as comprehensive as Ubiquiti’s management interface but the CAPsMAN feature on Mikrotik routers and APs does cover this use case.
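For a flavor of it, a minimal CAPsMAN setup on the controller looks roughly like this (RouterOS v6 syntax; the SSID, passphrase, and names are placeholders):

```
# On the controller router (hypothetical SSID/passphrase):
/caps-man manager set enabled=yes
/caps-man security add name=home-sec authentication-types=wpa2-psk passphrase="changeme"
/caps-man configuration add name=home-cfg ssid=HomeNet security=home-sec
/caps-man provisioning add action=create-dynamic-enabled master-configuration=home-cfg

# On each AP, hand the radios over to the controller:
/interface wireless cap set enabled=yes interfaces=wlan1,wlan2 discovery-interfaces=ether1
```

Once provisioned, APs pull their SSID, security, and channel settings from the controller, so there's one place to change things.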
Look on ebay for slightly older models. R710, R720 should be $200-$300. Not a replacement at scale, but the one-off purchase from ebay is fine for home use.
Unfortunately, without firmware updates they are little better than a brick. Especially for WiFi hardware, where you cannot control who can access it: better keep your APs patched.
Aruba doesn't require a cloud controller, that's just the "Instant On" version.
I used to run Aruba Instant (not the "instant on", no controller), but gave those APs to a friend and now run an Aruba 7005 controller with 2x303H and a 324.
Support/licensing costs are totally worth it for having trouble-free WiFi with no cloud dependencies (context: I've used and supported UniFi in various roles since the first UAP came out, which I think was free for UWC attendees, though I could be confusing that with their first camera), but I am a network nerd who's comfortable with enterprise WiFi.
Edit: I got upvoted by somebody, but as a UI user I'm genuinely looking for an answer: is it still possible to get inside if devices aren't connected to UI's cloud?
1. They are now pushing ads to their local controllers. That is a shady tactic. It also means the controller is phoning home. It means they might have an XSS in that code now or in the future.
2. They just deprecated a bunch of relatively new hardware. If I’m going to invest a non-trivial amount into their hardware I want to know it’ll keep working for a long time.
3. They lost trust due to this breach. How can I trust their code to secure my local network if they can’t secure their own?
Also add that all of the SOHO equipment is garbage that drops connections randomly, crashes, or simply can't deal with some WiFi chips.
This is the reason I went with the Ubiquiti UniFi 6 years ago. It was the only one I tried that didn't constantly drop connections or cost a fortune. But it's only G and I've been considering an upgrade, but there are no good options on the market that don't have stupid cloud management bullshit, are built on garbage hardware, or cost an arm and a leg.
Other than ubiquiti I assume you mean? Not that I know of. I want the old ubiquiti back, where customers, not stock price and ad revenue, were the focus.
The TP-link offering looks very similar to Ubiquiti from a quick scan a month or two back.
Both will run from locally hosted controllers if desired.
I've been seeing more Cisco "Meraki Go" kit around as well, which looks to target the same use cases as Ubiquiti (very very similar gear, WAPs, low end switches & gateways), albeit without a local controller option, but at least without the usual steep Meraki subscription charges.
I know someone that works there and they seem pretty happy with the place and product. just saw the amazon link now though so that may be a detriment depending on your view of them. (I have never used their systems or anything so it's not really an endorsement but something to consider)
Not 100% sure if that's what you are looking for (I don't do much network work), but I think Camsat's GlobalCAM-4.5G may be worth checking out, with one catch: the company targets the CCTV market. Still, it's just a router, without any special license fees or mandatory clouds.
Peplink seems pretty good; they do have a Cloud:tm: management offering called InControl2 but as far as I'm aware it's entirely optional. I've had good luck configuring everything via the local UI. My setup is a Balance Two + a few One AX APs.
Sure, plenty of solutions out there, but it's all going to be enterprise-priced. $600-$700 an AP, plus whatever is going to be the controller. In this space, you'll find cloud-based options, controller-based options, and standalone.
If you are willing to go this price range, I think FortiAPs feeding back to a Fortigate FW is rock solid solution. But a FortiAP-431F is $616. And a base FG60F as controller is $535 + service if you need it. And although you probably won't need repair options, support/maintenance is a yearly fee ontop of that.
Ubiquiti was definitely a unique company, offering many of the enterprise features at consumer pricing.
I realize I'm a bit late to the party, but GL-iNet does this. They run OpenWRT, too! PoE support can be hit or miss, but being able to truly own my devices without compromising on features is amazing.
You probably want something like [0], which has PoE support and an optional Cloud connection. You can roll your own automation with (e.g.) SSH access since they are just Linux machines.
You're in the boat of deploying OpenWRT or similar low-cost APs presenting the same SSID on a shared VLAN, plugging them into your favorite PoE switch, and manually configuring their channel strengths, etc.
It isn't so bad if it's a one-and-done thing, but all of the out-of-the-box solutions are very IoT.
Enterprise solutions with your self-contained WLAN controller and APs (not including PoE switches) are typically pretty pricey (>$5k, can spend a lot more).
You can absolutely manage Ubiquiti locally. Even with a ridiculously named local appliance called a Cloud Key. Their cameras are unfortunately another story.
Are pfSense, VyOS, stuff like that out of fashion? Or too hard to maintain? Automating that stuff with Ansible should solve the central management bit...
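As a sketch, the vyos.vyos Ansible collection makes pushing a shared config to several boxes fairly painless (the group name, interface, and addresses here are made up):

```yaml
# Hypothetical play: apply a baseline config to all VyOS routers.
- hosts: vyos_routers
  connection: ansible.netcommon.network_cli
  gather_facts: false
  tasks:
    - name: Apply shared LAN configuration
      vyos.vyos.vyos_config:
        lines:
          - set interfaces ethernet eth1 address '192.168.1.1/24'
          - set system name-server '9.9.9.9'
        save: true
```

The same inventory-plus-playbook pattern covers pfSense too, though there you'd typically drive it through its config XML or API rather than a CLI module.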
Yeah, of course you can. It's just FreeBSD with some configuration stuff on top; it can run hostapd, act as a switch, do lagg and span ports and all the other stuff you'd expect... not sure how common it is, though.
I bought some Ubiquiti gear a year ago (a pair of AC-AP Pros), and immediately after I got them I reflashed them with OpenWRT. Haven't had even one issue with them.
I get that people with larger networks would find centralized management useful, but I'm fine just managing a couple APs, a router, and a couple switches on their own. They're pretty much set-it-and-forget-it devices anyway.
Agree about TP-Link. I bought some Deco mesh kit for the house and am generally pleased with its performance. However the fact that I can’t configure them locally is a massive turn-off from buying the stuff in the future.
I used the TP-Link forums to put local management in as a feature request. Perhaps if enough people make a noise?
Unifi cloud controller is optional, but they don't make it easy to figure that out.
Setting up a UDM first thing I did was add a local super admin account, then disable remote access. That way, if their cloud auth servers are down I'm not affected as I use the local admin account.
Maybe Plume Homepass: https://www.plume.com/homepass/ ? I'm not sure if they're 100% equivalent, but it seems to cover a good part of the Ubiquiti feature set.
Interesting. Subscription-based services in the home seem like a disaster waiting to happen. Unless you can self host in the event of a company shut-down, you're beholden to a company and their solvency.
Can't see anything on their website for a transition plan in the event of shutdown (and of course, why would they post that and potentially signal lack of confidence in their longevity).
Ruckus seems pretty good. You can use their Unleashed APs without cloud/controller/subscription. PoE, and they can connect up to 75 devices. I just installed them at my hotel.
We had Ubiquiti, but power outages usually corrupt the controller and require constant resetting.
I have exactly this setup with three Aruba Instant APs (WiFi 5), but afaict they’ve combined the Instant product line with their cloud offering or something? I’m not entirely sure where they’re going with it, but I am very happy with the setup I have.
maybe their different product lines are managed differently, but all my Unifi WAPs, router, and switches are managed on a local controller that i installed and maintain myself.
i recall some features being locked behind a UBNT account, but that was only reporting-type stuff IIRC
you can build one but PoE might not be in the cards unless you want to convert the injected power back to a 5v barrel.
Alix makes a decent router board that can host Linux, and dual PCI cards mean 5 and 2.4 GHz APs. The total would be ~$200 for each "AP", but they would be pretty massively powerful.
That's awfully convenient for the company offering those products, but I want to control what happens on my network, even if that's inconvenient for some hardware vendor.
Case studies, focus groups, surveys and interviews are great ways to find the unknown unknowns. Of course, you need to pay people to participate in them, and then you need to pay expensive employees to conduct, collect and analyze the results.
It's often just cheaper to spy on customers, though, and pretend that there is no other possible way to conduct business.
> Case studies, focus groups, surveys and interviews are great ways to find the unknown unknowns. Of course, you need to pay people to participate in them, and then you need to pay expensive employees to conduct, collect and analyze the results
No they're not, because the vast majority of people simply won't be bothered, and most people probably aren't as reliable as concrete data.
People will be bothered if you pay them. DigitalOcean does this with focus groups for developers, and offers $500+ each for an hour or two of developers' time.
I was thinking of those as things you do before product release (so they're "known"). But they're not a good way to find out about reliability issues, because those only happen in especially weird situations, or over time, like running out of disk space.
Telemetry that tells you which features are popular is useful but does need filtering to avoid identifying individual users. But sending back errors and crashes is what's really important.
You can do things like have feedback forms but typically users don't like sending that in because they feel like they're doing work for free.
I have lots of devices that don’t phone home. Have been working for years. The company needing to know which websites I visit to make my network function does not speak well of the company.
PoE is probably Power over Ethernet. With that you don’t have to worry about laying down electrical line to power the APs. The APs draw power from the Ethernet line itself
Mikrotik is nice and does all of those things. Just needs actual expertise at network administration to set up. Once done though, it's fire and forget.
If you don't feel like configuring hostapd and dnsmasq I'm pretty sure there's an nmcli one-liner that will have network manager run a WAP for you. I use 'hotspot' on my phone all the time.
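For reference, the one-liner is roughly this (the interface name, SSID, and password are placeholders):

```shell
# Hypothetical values; NetworkManager handles the hostapd/dnsmasq duties itself.
nmcli device wifi hotspot ifname wlan0 ssid MyTestAP password "correct-horse-battery"
```

Fine for ad-hoc sharing, though for a permanent multi-AP setup with roaming you'd still want hostapd (or a distro that wraps it) configured per AP.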
> Maybe putting your network control plane in 'the cloud' isn't such a good idea after all...
Isn't one of the major selling points of cloud-everything "How can you possibly secure your service better than BigRespectableCompany?" I know any time I bring up self-hosting E-mail or a web site or whatever, someone always comes out of the woodwork to remind me that I am not an expert in securing Internet services, and that BigRespectableCompanies have full-time employees dedicated to security. Surely I should be moving to the cloud for this expertise! This is sounding more and more like FUD to me.
Managed services with state-of-the-art IAM policies are more secure than lifting and shifting a Linux box running whatever PAM configuration was set up on it in 2005.
Ubiquiti really aren't in the same ballpark as AWS or Microsoft, which are the companies people use that argument for, and you can bet your ass their security is better than in most places.
This is a fallacy. Just because these companies have great security teams doesn’t mean that things don’t fall through the cracks. Shit slips past the security team in product meetings all the time.
The claim wasn't that they never have security flaws, the claim was that they almost certainly have fewer security flaws than the alternative self-hosted solution someone named MastodonFan87 comes up with.
You may be smart, and have secured your systems properly, but someone with the same resume as you in another company might not be.
As your manager, how can I tell the difference between someone who actually did the work right, and someone who said they did the work right (and also legitimately believes that they did)?
You never can be... but you should already know that, being a manager. But if you're the target of an advanced persistent threat, it doesn't matter how good your guy is; they'll win eventually when the next 0day no one knew about shows up. But then your cloud provider will have been broken into dozens of times already. Hundreds of companies have to do a security audit of all of their networks now because Ubnt got got. The only ones who don't are idiots, or not using Ubnt et al.
So what, you are suggesting a strategy of staying away from large services and hoping that you won't be targeted?
I posit that it doesn't take burning a zero day, or a coordinated effort by the CIA, the FSB, and Randy Waterhouse to break the typical DIY self-hosted security implementation. (And that the manager paying someone to build it has no ability to tell between a great, a good and a bad DIY job.)
A network controller for local WiFi shouldn’t be reachable from the Internet at all. I’ll take a vulnerability ridden controller on an isolated management VLAN over cloud shit any day.
It's odd how the big cloud vendors have been able to escape criticism for being completely open by default. Other vendors have been taken to task and have adopted better security practices. For example, SuperMicro IPMI comes with a random password now.
It's extremely difficult to lock down an AWS account when there are a bajillion services, IAM policies, roles, etc.. I've been trying for the last few days and it's so difficult that I can understand things like this. I don't think it's acceptable, but I can see how it happens.
I think the expectation for AWS, Azure, GCP, etc. needs to change. Accounts should allow nothing by default and part of the tutorial / learning process should be understanding the permissions needed for each service and how to limit access to those services. As a bonus, they should show you how to configure Budget Actions to catch anomalies and runaway services. For example, I'm trying to set up my account so SMTP access to SES gets revoked for SMTP users if the message count exceeds a certain threshold. It's really, really hard because there's not a single document / guide that shows the process from start to finish.
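One way to sketch the revocation half of that: an explicit deny policy that an alarm-triggered Lambda (or a human) attaches to the SES SMTP users once the send-count alarm fires. Only the policy document is shown; the CloudWatch alarm and automation around it are left out, and the Sid is made up:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BlockSesSendingAfterAnomaly",
      "Effect": "Deny",
      "Action": ["ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    }
  ]
}
```

Since SES SMTP credentials map to IAM users and an explicit Deny overrides any Allow, attaching this cuts off sending without deleting the users.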
The triangle says Confidentiality, Integrity, Availability.
While your concerns are 100% valid, we also need to remember that setting up access in restricted ways, and asking users to understand the protections and remove the correct barriers (or implement the controls needed to work within them), always runs the risk that some users will find the protections cumbersome and instead find a (totally incorrect) way to defeat them, or even route around them entirely, mooting any effort to secure the platform.
And every time I hear this played out in conversation, the answer is "that's on them!" But it's clearly a balancing act, a trade-off; tautologically, when you make the service less accessible then... it is, well, made less accessible.
Besides facilitating secure access, sales conversion ratios will also depend on that accessibility. The crux of your argument stands: the defaults are too open, and we need to do more to ensure that naive users aren't handed a loaded gun to aim at their own feet.
Uhm... in the AWS I've used, it's explicit allow (deny by default), and all of their docs and tutorials start with IAM, what's needed and why. What more do you want? I can't imagine IAM being simpler while staying as granular as it is. You just have to actually take the time to learn about it, like any system. It's still drastically easier to use it securely than doing something at a similar scale and level of detail manually.
The hard part for me is figuring out how to disable access without breaking everything. I know it’ll be useful once I understand and I’ll take the time I need to learn it, but most people won’t.
I prefer the opposite learning direction. Start closed and open the 1 or 2 things I need instead of having to understand 1000 things immediately to configure permissions reasonably.
Have you tried Access Advisor in AWS IAM? It’s been out for a few years now and is specifically targeted at using “... last accessed information to refine your policies and allow access to only the services and actions that your entities use.”
Can you explain how IAM doesn’t work well with the “starting closed” approach? IAM authorization is “default deny” and every principal needs an explicit allow statement with the appropriate action before authorization will pass.
> Can you explain how IAM doesn’t work well with the “starting closed” approach?
It works ok once you do a lot of learning and read the best practices. I think a lot of people will skip that and use their root account for everything.
The biggest mistake I made was creating an admin user, but giving it too many permissions and using it like a normal user.
After learning more I use the root account to make an admin account, but I think the admin account should only use IAM to create other fine grained users.
So it works fine, but I think it would be better to force people into creating those first couple of accounts with permissions chosen by experts. It’s too easy to jump right in and start using an over privileged account.
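A sketch of that bootstrap with the AWS CLI, run once with root credentials and then (ideally) never again. The user and group names are made up; IAMFullAccess is a real AWS managed policy:

```shell
# Hypothetical bootstrap: create a group whose only job is managing IAM,
# put one named user in it, then lock the root credentials away.
aws iam create-group --group-name iam-admins
aws iam attach-group-policy --group-name iam-admins \
    --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
aws iam create-user --user-name alice
aws iam add-user-to-group --user-name alice --group-name iam-admins
```

From there, the iam-admins user creates the fine-grained users/roles for day-to-day work, which is exactly the "first couple of accounts chosen by experts" flow described above.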
You can use AWS accounts like microservices. The biggest security walls in AWS are the account boundaries; those have to be specifically configured to cross. Sometimes (1%) it's unavoidable, but if you have multiple services running in one account, you force yourself to weave arcane webs of IAM permissions crisscrossing all over to get what you need where. It's a terrible model that people inflict on themselves because it's how everything used to work.
Spinning up your own DB instance is also "open by default" and takes both effort and expertise to secure properly. I think it's pretty reasonable that there's a large surface area of IAM permissions when AWS offers a vast number of disparate services.
>If this is true, and whoever breached them had full access to their AWS account, can we really trust them to clean up all their tokens and fully eradicate all forms of persistence the hackers may have gotten?
This is the same for any breach. At least if you're using AWS, you know that your management tools aren't lying to you (as long as you assume AWS itself isn't hacked) and you can use those tools to cleanup. If you run your own machines, you can't assume your management tools work correctly. All your machines could have rootkits, all your tools could contain backdoors, and every attempt to cleanup might just be a fake veneer. See Reflections on Trusting Trust.
Full disclosure I work for a cloud computing company (but not AWS).
> can we really trust them to clean up all their tokens and fully eradicate all forms of persistence the hackers may have gotten?
The state of security in the tech industry is miserable. The only companies we should trust not to leak our data are those that never collected it in the first place.
We are certainly not having this conversation enough. I regularly chat with a risk officer and she keeps telling me: data minimization is your first line of defense.
Heck, most operating systems are leaky by default.
Even OpenBSD, which has a stellar track record in terms of security and "goes against the grain" on many decisions for the sake of being secure by default (for instance, disabling hyperthreading altogether to prevent any kind of Spectre vulnerability), is under constant scrutiny for not being secure enough.
Maybe connecting everything to a network and making it a high value target by collecting everyone's data is just a terrible idea in the long run.
I haven't got much sources for you but what I've picked up over the years: a lot of OpenBSD's security is just old fashioned manual code review and audits, and there are not enough eyeballs. Someone like Ilja van Sprundel can go in the source code and find a bunch of issues without too much trouble [1]. I don't see any concentrated efforts to improve the status quo (where's formal methods, where's automated fuzzing, where are initiatives to employ more safe programming languages, static analysis, etc.). And while OpenBSD pride themselves on their mitigations, they aren't exactly state of the art and some of the more recent stuff (like trying to eliminate ROP gadgets) seems just futile. The biggest thing OpenBSD did with mitigations was enabling them by default for the base system and ports. What does anyone remember OpenBSD for in 2010-2020? Pledge, probably. That's a nice thing but more for containing the damage than actually making stuff secure in the first place.
My concern (and the concern of many others, I think) is that if OpenBSD suddenly got enough attention from the wider security community, including people who actively look for holes that can be exploited, there'd be plenty of important stuff found. Until then, these issues sit quietly waiting for a malicious party to discover them. There's quite some fanfare for OpenBSD, but how many of you are actively auditing the code? I'm subscribed to cvs@ and tech@ and I read them daily and I just don't see much contribution at all from outsiders. And when I do see it, it's mostly stuff like fixing typos or amending man pages. All the commits that change code with security implications tend to come from the core developers, and are reviewed by a handful of people at best. And I have seen some obviously broken stuff slip through.
> if OpenBSD suddenly got enough attention from the wider security community, including people who actively look for holes that can be exploited, there'd be plenty of important stuff found.
This seems like a structural advantage to less popular software. If your software is less common, attackers will have put less time into exploiting it, and therefore you will be more secure. My impression is that MacOS and Linux both benefited from this relative to Windows for a long time.
In general this should be true if usage grows faster than security resources for popular systems. It might still be true even with significant, commensurate investment in security as you grow: if a small percentage of users misconfigure the software and create vulnerabilities, that population hits a critical mass with growth regardless of your security efforts.
Man, I really wonder why the lack of proper 2FA is so widespread?
Is it really cost and complexity?
Or just missing awareness?
Or the lack of consequences when you get hacked in a way which could easily have been prevented (though then they might have attacked in a different way, tbh).
It's people not getting it and being plain annoyed by the second factor. YubiKey or Authenticator app on a different device... it's too inconvenient and people often only do it if forced (e.g. banks do this afaik).
Every day I sit at the same desk, at the same computer, logging into the same websites, using 2FA over and over and over and over while sites time out "for my protection". It's a plague. Write a damn desktop app I can run locally, I didn't ask for people from Turkmenistan to be able to login as me, so you could sell me a halfassed web version of something.
Joseph Heller predicted 2FA in Catch 22 when he wrote:
"Almost overnight the Glorious Loyalty Oath Crusade was in full
flower, and Captain Black was enraptured to discover himself
spearheading it. He had really hit on something. All the enlisted men
and officers on combat duty had to sign a loyalty oath to get their map
cases from the intelligence tent, a second loyalty oath to receive their
flak suits and parachutes from the parachute tent, a third loyalty oath
for Lieutenant Balkington, the motor vehicle officer, to be allowed to
ride from the squadron to the airfield in one of the trucks.
Every time they turned around there was another loyalty oath to be signed. They
signed a loyalty oath to get their pay from the finance officer, to
obtain their PX supplies, to have their hair cut by the Italian barbers.
To Captain Black, every officer who supported his Glorious Loyalty
Oath Crusade was a competitor, and he planned and plotted twentyfour
hours a day to keep one step ahead. He would stand second to
none in his devotion to country. When other officers had followed his
urging and introduced loyalty oaths of their own, he went them one
better by making every son of a bitch who came to his intelligence
tent sign two loyalty oaths, then three, then four;"
Notice how 2FA turns into MFA? Keep adding FA until you're as secure as the security theater demands.
"To anyone who
questioned the effectiveness of the loyalty oaths, he replied that
people who really did owe allegiance to their country would be proud
to pledge it as often as he forced them to. The more 2factor logins
a person went through in a working day, the more secure he was;
to Captain Black it was as simple as that"
"Captain Piltchard and Captain Wren
were both too timid to raise any outcry against Captain Black, who
scrupulously enforced each day the doctrine of 'Continual
Reaffirmation' that he had originated, a doctrine designed to
trap all those men who had become insecure since the last time they
passed a 2factor authentication prompt a few minutes earlier."
Honestly, Windows does this right with AD, Kerberos, and SPNEGO.
You log into a physical machine with a password (the machine is trusted on the network via AD, so physical access is one factor and the password is a second).
You visit websites and they use SPNEGO to land on Kerberos or NTLM auth, which bootstraps off the fact that you're already authenticated to Windows. You never even need to see a login page.
It's achievable with macOS and Linux, but AFAIK there's some more configuration to be done. The only place I saw with a setup like that was a bank, and it was part of a new technology stack that almost nothing used yet.
With that setup there's almost nothing to phish, if you can train people to only enter their password into the OS at login. You can pretty much eliminate the possibility of credential sharing by locking logins to certain machines.
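As an aside, clients have to be told which hosts are allowed to negotiate silently, otherwise the browser will never attempt SPNEGO. In Firefox that's a preference; the pref name below is real, the hostname is a placeholder:

```
// user.js: opt intranet hosts into silent SPNEGO/Kerberos negotiation
// (hostname is hypothetical; the pref name is a real Firefox pref)
user_pref("network.negotiate-auth.trusted-uris", "https://intranet.example.corp");
```

Chrome and curl have equivalent allow-list mechanisms; the point is that the trust boundary is explicit configuration, not something a phishing page can trigger.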
> Write a damn desktop app I can run locally, I didn't ask for people from Turkmenistan to be able to login as me, so you could sell me a halfassed web version of something.
He could have had 2fa on his console account but saved an access key for CLI access. Many large organizations have an infrastructure where you exchange your corporate authentication (including 2FA) for a short lived AWS access key, but AFAIK this isn’t out of the box.
This seems incredibly clunky and most people are probably not doing something that involves typing the ARN of their MFA device on a day to day basis. To be tenable on a daily basis you need something like “aws login” with username, password, and code that sets up your credentials file correctly. Expect people to copy and paste values around, and you’ve already lost.
Not to mention legacy code that only knows about access key ID and secret, and doesn’t have a place to even put a token.
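To make the "aws login" idea above concrete: the real CLI call is `aws sts get-session-token --serial-number <mfa-arn> --token-code <code>`, which returns temporary credentials that then need to land in the credentials file. A hedged sketch of just the last step; the function and profile names here are made up for illustration, and `creds` mirrors the `Credentials` dict STS returns:

```python
import configparser
import io

def write_session_profile(creds, profile="mfa-session"):
    """Render an AWS credentials-file section from an STS-style response.

    `creds` mirrors the Credentials dict returned by
    `aws sts get-session-token`: AccessKeyId, SecretAccessKey, SessionToken.
    """
    ini = configparser.ConfigParser()
    ini[profile] = {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        # Legacy tools that only understand key id + secret will ignore
        # this line entirely -- the parent comment's point about tokens.
        "aws_session_token": creds["SessionToken"],
    }
    buf = io.StringIO()
    ini.write(buf)
    return buf.getvalue()

print(write_session_profile({
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "example-secret",
    "SessionToken": "example-token",
}))
```

Wrap the STS call and this write-out in one command and people never copy-paste credentials by hand, which is where the process usually breaks down.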
> Man I really wonder why the lack of proper 2FA is so widespread?
Because it's a giant PITA unless you have a dedicated team managing it. And the service companies get this and charge accordingly (aka enterprise levels).
It's why companies like Auth0 get bought for gigabucks.
After the Unifi Video fiasco, I bought a UDM Pro to test Unifi Protect.
Once I saw it required cloud login I got scared.
After I saw a Ubiquiti SSH key preinstalled in a device with unfettered internet access, I shut it down, never to bring it up again.
There was no option to bypass cloud login when it got to my hands. Apparently that has been "fixed" with some update, but if you buy a device and it comes with outdated firmware, as tends to be the case with their cameras and APs, your only choice is: activate on cloud, set up, update, factory reset, set up locally.
About 2... I guess when you have access to all their source and infra it's just a matter of pushing an update to enable SSH, and they don't even need to push a key. My problem with the keys is that they come bundled and you don't know it. There's no reason for them to install a key without your consent. Imagine Microsoft presetting an Administrator account on every Windows Server without telling anyone... It's just a security problem, even more so in a firewall.
> Maybe putting your network control plane in 'the cloud' isn't such a good idea after all...
Sure it isn't. It is an extremely bad idea, and honestly something like the Ubiquiti breach doesn't even surprise me; once you have worked in the "enterprise(tm)" world, nothing about this seems strange.
There is just no way I would buy a router that communicates with 3rd-party servers; letting it access the LAN is a complete no-go (even when I'm paying for an ISP router as part of the package, it runs as a bridge just to pass the connection to my own router).
I consider the router the first line of defense for inbound traffic and the last line of defense for outbound, and there is just no way I'd trust some fishy corporation with that.
And if the corporation is actually promoting cloud access, like Ubiquiti or Google, they are pretty much banned from my shopping list for all time.
The breaches are common, the reporting/discovery of them is not. Security just isn’t a priority for a lot of Orgs, as the consequences are minimal (see: Equifax) due to a lack of regulatory or financial penalty pain when a breach occurs.
“Help yourself to a free year of identify theft insurance” and all that jazz.
This is correct. I worked for a fairly large corp with lots of customer data, and while I haven't witnessed breaches of said data, it's pretty much a matter of time.
My colleagues and I always pushed for more secure setups and configs, but the common rebuttal was "no need, there's a Keycloak running several layers above, and you need to use a VPN and get access to AWS first; go implement features instead."
I hope for them that no rogue employee decides to play around a bit or that no one stores their credentials in some cloud LastPass account with a '123456qwerty' master password.
Yes, if they destroy all of their backups, all of their hardware and every one of their current AWS accounts. Then start entirely from scratch. Any measure falling short of that (and let's be reasonable, it definitely will) means that they're entirely untrustworthy from now on.
Of course having your home network controlled from the cloud should already have been entirely untrustworthy, so in practice it won't be an issue for their sales.
There is Fortinet (which acquired Meru 5 years ago). Meru was pretty OK. I helped manage a setup of 2500+ access points on a campus.
I left that job 6 months after Meru was acquired, so I can't say how they are now.
Got 3 no-brainer CVEs against them. We're an enterprise customer who is now moving away, because after Fortinet acquired them support dropped off a cliff. They had some good people, but it became rather apparent that there was a bit of a toxic culture there.
When you're operating such massive services, at minimum you should protect the admin accounts not just with 2FA, but also with IP firewall. Looks like both were missing from here ...
This isn't really true. If you have an AWS account, you need a global god admin. That's the root user. As an IT guy, I have to store those creds somewhere. So I make the password super long and random, store it in LastPass, add 2FA, and add alerting for all logins. It's never used except in the super-rare case where we have to do something that requires the mega-god-level privileges of the root account (like changing billing to a master account, etc.).
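The "alerting for all logins" piece can be done with stock AWS tooling; a sketch of the EventBridge event pattern AWS documents for root console sign-ins, assuming CloudTrail is enabled and delivering sign-in events (the rule would route matches to an SNS topic or similar):

```json
{
  "detail-type": ["AWS Console Sign In via CloudTrail"],
  "detail": {
    "userIdentity": {
      "type": ["Root"]
    }
  }
}
```

Because the root user should essentially never log in, any match on this pattern is worth a page, not just an email.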
I worked at Ubiquiti while you were there. I can confirm that the company was going downhill fast.
The US offices were starting to feel empty because so many people were leaving the company. Only place I've ever worked where engineers would quit before they got another job.
Saddest part was all the wasted potential. There were good engineers making good products at Ubiquiti only a few years ago. Once UniFi exploded in popularity the CEO started trying to micromanage everything and it all started falling apart.
Greed. 100% greed. While I was there, the CEO loved to just fly between offices (randomly) on his private jet. You never knew where he'd pop up, and that put everybody on edge, because when he was unhappy he tended to fire people in large chunks (and shut down entire offices). Every decision was motivated by how it affected the stock price.
I'm just an outsider looking in based on a short paragraph, but that doesn't strike me as greed. How does firing entire batches of people help the stock price? Anyone with more business acumen than a cat will understand that it doesn't. "Oh, that office made a mistake? Let's fire the lot of them so they'll learn how to do better next time!"
Based on this, it seems more like an asshole with some attitude problems rather than greed per se.
It’s very easy to say “greed” because we want to believe bad things are always the fault of someone’s personal moral failings. Hopefully the tech community will start to realize that when the same problems keep occurring for the same reasons, it points to a systemic failure.
My apologies for the language, but throwing away the advantage and further potential of the USA, in the interest of personal wealth and quarterly profits, is even more disgusting.
The majority of America’s management culture is horribly broken.
On the plus(?) side this management culture sometimes allows for easy external disruption.
It's how you do a text replacement in vim; I believe it's s for substitute, /../ for the regular expression, and g for global, to substitute multiple instances.
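Concretely, the common forms look like this (foo and bar are placeholders):

```vim
" Substitute on the current line only, all occurrences:
:s/foo/bar/g
" The % range runs it over every line in the file:
:%s/foo/bar/g
```

Without the trailing g, only the first occurrence on each line is replaced.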
It's unfortunate what seems to have happened to Ubiquiti. The idea of decent network hardware with a good UI that can support the prosumer to small business segment of the market has a lot going for it.
In the early days, it seemed like Ubiquiti was going to nail it and was building up a strong, loyal following as a result. Then came all the reports of quality problems, promised features never delivered, phoning home, ads in UIs, and not just the security breaches but the cover-ups...
How the brand hasn't become toxic already is a mystery to me, yet look at the stock price tracker. It's been trending up for years and it has well over doubled in the past six months alone. Apparently investors aren't too worried about any potential consequences of all these reported problems.
The early days at Ubiquiti were good. I worked with a lot of good engineers and we shipped good work. The decline is a recent problem.
> How the brand hasn't become toxic already is a mystery to me, yet look at the stock price tracker. It's been trending up for years and it has well over doubled in the past six months alone.
This is your answer. No incentive to change. All of the bad engineering decisions have been rewarded by increasing stock price and continued sales.
Most of the original engineers have quit by now. I lost track of how many UniFi engineering leads joined and then quit after it started falling apart. Before I quit, I heard rumors that the CEO was making two separate teams work on the Dream Machine project separately, competing against each other. That made more people quit. I think they were trying to reboot engineering in foreign countries when I left because it felt like we were forgotten in the US offices.
>This is your answer. No incentive to change. All of the bad engineering decisions have been rewarded by increasing stock price and continued sales.
It'll come around, it just takes waaaaaaaay longer than you'd think for a slump in engineering quality to be reflected in the market. Especially with hardware.
We have a few publicly traded clients that we've worked with for decades (and by "decades" I mean longer than I've been alive). It's cyclical: they want our engineering to build new products when they're doing badly in the market, and once our work is released and gets them some success they'll transfer the design back in-house as aggressively as possible (their engineers aren't all bad, it's just not an engineering culture there). By the time we're out, they're still riding the upswing. Their management's institutional memory either doesn't see the cycle and/or they don't care beyond the next few quarterly reports.
What I'm trying to say is, I know it hurts to see your baby languish, but it catches up to them eventually.
IMO, the CEO had a bit of a Steve Jobs hero-worship complex, but only all the bad parts. I can absolutely see him putting two teams on the same project, and "may the best product win".
The team that "lost" would get canned, obviously (I saw it happen to two separate offices while I was there).
> IMO, the CEO had a bit of a Steve Jobs hero-worship complex, but only all the bad parts.
Part of me wishes Steve Jobs had never been brought back to Apple and died in obscurity. He's such a bad example. People idolize him, but his good parts can't be imitated, his bad parts can, and a lot of people can't seem to tell the difference.
Intel tried this too, according to an ex-Intel employee here. It's a management strategy intended to get the best result by inspiring competition. The problems it invites are the obvious, but the tradeoff may be justified in some scenarios.
It's also the premise of David Mamet's famous play Glengarry Glen Ross.
Google certainly seems to do this when it comes to chat applications. Ironically though, they've actually (arguably) lost marketshare - they went from gtalk being pretty widely used (in the late 2000s, early 2010s, as Android took off), to having a confused and fragmented ecosystem (Allo, Duo, Hangouts, Chat, Messaging), and it seems none of those have the same market penetration as the original did.
Perhaps internal competition to that extent simply confuses customers?
They essentially destroyed all the competition (AIM, YIM, ICQ, MSN, etc.), the open-source solution that would have standardized chat (XMPP), and themselves, pushing people to proprietary solutions like WhatsApp.
There’s an infamous anecdote about Jobs doing this. Theranos had the same “two teams” story.
A lot of CEOs who think they’re the next Steve Jobs, don’t understand their own tech, and presume the solution to their technical problems is a lack of “motivation”.
Creating a skilled skunkworks team to handle a critical problem is a great idea. Making two? And putting them in conflict? It’s like throwing a steak to your dogs and having them fight over dinner. Idiocy.
I can see why the idea is tempting, i.e. testing multiple strategies and survival of the fittest. But in reality there are extreme downsides: teams will lie and fudge data to get ahead, and people don't trust their coworkers.
I think this is where strong technical leadership is needed. At some point someone needs to make a decision on the technical direction and have the conviction to stick with it.
I imagine it comes from some flawed business belief in the survival of the fittest. I've never heard a tech person advocate for it, I only ever hear it from business types.
Of the things I've seen reportedly happening at Ubiquiti, that one makes more sense than some.
Businesses put projects out to tender all the time, and other businesses that can provide what is wanted invest sometimes very considerable resources into putting in a bid, knowing that if they don't make the winning bid then those resources will most likely be completely wasted. Evidently it is still worth operating a business on that basis, because the benefits when you do win outweigh the costs of the failed bids, and those costs might include reduced morale in a team that worked on a failed bid.
If that is the case across industries as a whole then economically it might make sense for a business to operate on the same basis internally for their Next Big Thing. Run multiple independent teams at the start, give them all the same brief, then see which team comes up with the most promising starting point. I don't see much of an argument for continuing the internal competition beyond the concept to prototype stage, though, unless perhaps it turned out that more than one team could produce a product that was viable in its own right without competing for the same market.
What do you suggest for someone leaning on an EdgeRouter Lite (with EdgeOS v1.10.11, staying far away from v2.x) and a Unifi UAP-AC-PRO access point?
The router will probably reliably carry me until saturating 1Gbps becomes a daily occurrence and the access point will be retired when WiFi 6E comes around (assuming Ubiquiti's WiFi 6E access points aren't required to connect to the cloud.)
Also in answer to sibling comments - you don't need to connect the UI software to the cloud. I have an Edgerouter SFP-X and a few AP lites. I recently added an 8 port Unifi switch for more PoE ports.
Following is to the best of my knowledge! Any ex-Unifi folks or other pros are welcome to correct me:
- The Edgerouter absolutely does not talk to ui.com (except check-for-updates). There's no remote control ability etc etc.
- The Unifi range can be controlled from the cloud, but via your Unifi Cloud Key. You can run this software yourself, without buying extra hardware. When it is not running there is no comms to the cloud. Run the software, configure things, stop the software - I run it in docker on an rpi4.
I think the brand isn’t toxic because of the state of the competition.
Even with this hack, their stuff is still the best available for home use. Netgear or Linksys consumer routers are awful. The mesh devices are okay, but serve a different market.
The other stuff people recommend is often 2-3x the Unifi price and 2-3x more complicated to setup and configure.
Any ex-employees want to start a company making this stuff that doesn’t suck?
> The other stuff people recommend is often 2-3x the Unifi price and 2-3x more complicated to setup and configure.
I don't know about 2-3x the price, at least not here in the UK. We looked into this when fitting out a new office with the networking essentials a couple of years ago, and Ubiquiti wasn't particularly attractive on headline prices compared to the other typical brands that get mentioned in that space (MikroTik, DrayTek, etc.).
However, the ability for non-networking experts to set something up quickly that does the job and doesn't have glaring security problems is definitely a competitive advantage in that prosumer to small business market. None of those other brands has a great UI that I've seen and they all tend to assume that anyone who wants to set up a couple of extra APs for a small office WiFi and a standard firewall for the Internet connection will be a pro-level network expert.
I think it would help a lot of people if better products/companies started to compete seriously on that front, and I have to think that with the SME market to fight for there is room to compete with the established names. After all, that is largely how Ubiquiti themselves broke into the market, or at least that's the perception I had at the time.
Who is "we"? You're talking about brands aimed at enterprise customers. I have no idea how much penetration Ubiquiti has managed to make into that market, but certainly around these parts its products are better known in the tier below that. The kind of organisation that is considering Ubiquiti IME probably wants significantly more functionality and scalability than home or entry-level small office gear but isn't working at enterprise scale and doesn't want to pay for it either. That organisation is unlikely to be considering the kinds of brands you mentioned as alternatives, and I rarely see any of those brands mentioned in discussions about alternatives to Ubiquiti.
I kept thinking that all the laments about Ubiquiti and the others were enterprise-level stuff and sysadmins' headaches, so I was thankful I didn't need to worry about it. But more and more I marvel at how I simply chose an Asus 5 GHz router by reviews, bought it secondhand, and now have it chugging along for something like eight years with only some hiccups in summer from the heat. With no ‘cloud’ shenanigans.
Also, there are DD-WRT, OpenWRT and such. How come people don't use those instead of whatever broken software the manufacturer bestows on them?
Reminds me a little bit of Adverse Event Reporting in pharma. If a drug manufacturer finds out about an adverse event (i.e. a bad reaction) to a drug, it kicks off all sorts of obligations that have the potential to be time-consuming and expensive. So pharma is the one sector you won't see with a "social media listening/analysis" department in marketing. They actively avoid tracking or learning about discussion of their products on social media.
Sounds like a case of poor incentives. It's easy to wag our fingers and say "well they shouldn't be doing that" but difficult to come up with a system of incentives that makes everyone want to do what's socially beneficial. In this case, it seems like there should be a separate organization in charge of looking for adverse events that is rewarded for finding events (instead of punished). We use some strategies like this currently when regulating the finance industry
I worked for a pharma co for a while; they did have a social media listening department in marketing, and we were trained to report any discussion of the company at all to a special investigations unit that would follow up.
As someone who works in pharma currently, I have seen the same. The pharmacovigilance unit does search the internet/social media for AE's, off-label use, etc (depending on region). Secondly every single person in the company also needs to report events when they see/hear/read them. So not having that social-media department wouldn't be doing much, not all thousands of employees can/will/want to avoid social media.
Thanks. I can well believe my experience (ca. 2014) is a little outdated. I would imagine it is still quite a difficult sector to sell social listening into, but it makes sense that eventually you have to take your head out of the sand.
"We believe that the hackers obtained read-write access to our database, but we also believe that they were too polite to actually use it for anything."
Ubiquiti's response is not surprising. Of course they would lie and deflect about the severity of the attack. They have terrible customer support and awful software update communications; besides, they are hostile to analysts and the press.
Either Ubiquiti made false material statements, or the company is negligent. In both cases, it will get them into hot water.
In Ubiquiti's defense, I once brought a disclosure to their attention on Twitter a few years back and they very swiftly issued an update. I guess things have gone downhill since then. It boggles the mind why a company whose core business is catering to the self-hosting crowd, would try to force self-hosters onto its cloud plantation, when it can't even protect its own house.
Better solution: never store unencrypted PII/PCI/PHI/etc. in the database. There are loads of tokenization solutions (Very Good Security got a bunch of buzz a couple years back) that do this, or alternatively all of the big cloud providers have key services (KMS on AWS and Google, Key Vault on Azure) so that you can ensure that every decryption attempt is tracked and logged.
If you need to search on some of this data you should use blind indexes (Google blind index for more info).
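The blind-index idea can be sketched in a few lines: store a keyed hash of the normalized plaintext in an indexed column next to the ciphertext, and look rows up by hashing the query the same way. A minimal illustration; key handling is deliberately naive here, and a real system would pull the per-column key from a KMS:

```python
import hmac
import hashlib

# Per-column secret; in practice fetched from a KMS, never hard-coded.
INDEX_KEY = b"per-column-secret-key"

def blind_index(value: str) -> str:
    """Keyed hash of a normalized value, stored alongside the ciphertext.

    Equality lookups compare blind_index(query) against the stored index
    column; without INDEX_KEY the index column reveals nothing useful,
    unlike a plain SHA-256 of the value, which is trivially brute-forced
    for low-entropy data like emails.
    """
    normalized = value.strip().lower()
    return hmac.new(INDEX_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Same email, different formatting -> same index value for lookups.
assert blind_index("Alice@Example.com ") == blind_index("alice@example.com")
```

The trade-off is that only exact-match (and, with extra work, prefix) queries are possible; range scans and LIKE queries over the protected column are gone by design.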
Under GDPR, a failure to know about (detect) a breach (and then report it yourself) is in itself a violation. Likewise, failing to have suitable organisational and technical measures in place to protect the data is a breach.
I'd certainly argue that being unable to account for processing operations after a breach, because a lack of logs means you don't know what was done, is therefore itself a violation.
Can you provide more information regarding a system that can log these types of breaches (and all other activity, as required) and that would be deemed "safe" and reliable post-breach? i.e.: A system that can provide logging and that can *assert* that all logs, even in the event of a breach, are asserted CIA?
AWS offers object locking, which is similar to a WORM drive (Write Once Read Many). This prevents logs from being deleted. The other approach is to ship logs to another AWS account.
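A sketch of the configuration as accepted by `aws s3api put-object-lock-configuration` (the retention period is a placeholder; note Object Lock also requires versioning and generally must be enabled when the bucket is created):

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Days": 365
    }
  }
}
```

COMPLIANCE mode means not even the root user can delete or overwrite locked objects before the retention period expires, which is exactly the property you want for audit logs in a root-compromise scenario; GOVERNANCE mode allows specially permissioned principals to override, which is weaker for this purpose.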
Thanks. I was a bit puzzled earlier why AWS was so insistent about enabling object locking, my specific use case doesn't profit from remote versioning at all. But I can see how this would mitigate log integrity concerns. I'll definitely enable it for that.
You are required to have internet access to set up something like the UDM-Pro. After it is set up you can create a local admin account and disable remote access.
Here is how:
1. Login with your online account credentials and password
2. Choose system settings
3. Choose advanced
4. Disable Remote Access
5. Confirm that "Transfer owner" won't be available if you disable remote access.
The issue in general is that the UniFi stuff can be crappy and buggy, but it SUCKS LESS than any other complete solution for a home / small enterprise at that price point.
I personally used to give them a strong recommendation, and even now it's a recommendation with some footnotes. They have been growing too fast and the SW quality has gone down. Being on the latest release is not always the best idea.
To be fair, in my experience I have had many conversations with Cisco that started with "no, not the latest GA, but the latest proven STABLE GA."
Just verifying my understanding: this will make it impossible to reach the device from ui.com or otherwise off-network, but an attacker could:
1. use leaked SSO keys to forge an SSO token
2. craft a malicious webpage
3. get an unsuspecting UDMP user (e.g., me) to navigate to that page
4. run scripts on that page that would access & interact with the UDMP from the browser within the network, using the forged SSO
Is this still a possible vector? Presumably UI would have rotated their SSO keys by now, but since there's no way to disable SSO-based login to the UDMP....
Hmm, I followed your steps and my ui.com account can still log into the device.
I have also created a local account, that I can use to log in alongside my ui.com one, but I cannot disable my ui.com SSO from being able to sign into the device.
Let's make sure we are talking about the same thing.
You have local and SSO account.
You disable remote access in your local cloud key.
You open the local IP for the CK and can still sign in using the SSO account, is what you're saying, so the auth token is coming from remote.
A question, if I've got this correct: if you go to the ui.com portal, the UI cloud-based one, in a web browser, do you still see the controller? Can you log in and still manage it through the remote web portal? That is what turning off remote access controls: you should not be able to manage the system remotely.
Disabling remote access applies to the remote web-based UI portal, and that should not work after you disable it (my understanding). It is possible that you can connect to the local controller, use SSO to authorize against the web, and be passed a valid token to log in; however, that would be local only, not remote. I.e., the hacker would have to have your SSO AND be on your local network.
Have you tried / are you able to delete the SSO account in the local CK? I have not tried, but will later.
This company is a disaster, it seems, and I have just set up my whole home infrastructure and home security around their products...
They were the most recommended brand when I was shopping for new stuff a year ago.
I picked up an EdgeRouter and none of the cloudkey/unifi stuff. I initially felt like maybe I should have picked the unifi gear and maybe a dumb switch, but now I don’t regret the EdgeRouter. Couldn’t be happier with it.
I don’t trust anything that tries to solve the “firewall problem” by setting up a cloud service for what should be a local appliance.
I always thought that the main selling point of their devices was that you can run your own Ubiquiti server at home and keep everything local? They are always portrayed as the not-so-shitty IoT company.
If you don't have remote access enabled and aren't running their surveillance camera software, it is not clear to me that there is any risk to the customer from this event (outside of the source code being used to generate new exploits). It doesn't sound like the attackers were able to abuse automated firmware update functions, and losing credentials to a UI account has no impact on users running cloud key locally without remote access enabled.
Right. I would never have any device like a camera be directly connected to the internet and instead cut off that device from the internet in my router software and only access it from outside via a VPN.
Not that this whole screw-up should be excused in any way or downplayed.
I bought one of their security cameras to act as a nursery cam last year, which I could later convert into a home security camera.
The 'in house' software, unifi-video, was discontinued 3 months after I got it set up. All of the apps I use to connect to the system have been pulled from the app store, and you now have to use their camera controller for the one camera, vs. the software I'm running on my Linux box.
Their controller is much more limited, and many, many security camera installers were caught off guard with no path forward for their customers. It's a nightmare of a shitshow and I would never in a million years recommend Ubiquiti as a company at this point.
I now use the camera in direct rtsp mode. This way it can be used by any rtsp tool including video recording and the lot. For the nursery camera I just use IPCams on iOS on an iPad.
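A direct RTSP feed like this can be consumed by any standard tooling. A sketch with ffplay/ffmpeg (the IP address and stream path are placeholders; the real path depends on the camera's RTSP settings):

```
# View the stream live (placeholder URL; check the camera's RTSP settings)
ffplay -rtsp_transport tcp "rtsp://192.168.1.50:554/s0"

# Record continuously into 15-minute segments without re-encoding
ffmpeg -rtsp_transport tcp -i "rtsp://192.168.1.50:554/s0" \
  -c copy -f segment -segment_time 900 -strftime 1 "cam-%Y%m%d-%H%M%S.mp4"
```

Forcing RTSP over TCP avoids packet loss artifacts on wifi, and `-c copy` means the recorder barely touches the CPU.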
Yep, I also use their cameras as baby monitors. RTSP mode to VLC on an old chromebook as an always-on monitor.
The Protect app works pretty well now assuming you have a controller to connect to, but the time between the Video app shutting down and Protect actually working properly was very frustrating. I would never trust the Protect app to stay connected while I'm asleep, though. It's definitely not stable enough for that.
The very first night I had the camera set up was the night of the Level 3 outage and major internet snafu, which meant I couldn't actually get into the app to view the camera. RTSP mode sounds pretty good at this point with only one camera.
(Ignoring the fact that Ubiquiti marketed these cameras as having a speaker when, in fact, you cannot send audio to the camera; it only makes noise on its own.)
I guess the concern here is if your VPN was provided by Ubiquiti then you might have an issue.
My approach has been an isolated (read basically no internet) LAN, bridged by a small PC running hardened and locked down Linux. There's no egress from the LAN. VPN access to this LAN goes via the PC under my control, which itself has access to the wider internet via its second interface.
This approach is nice as I don't have to trust any router vendor or proprietary software vendor to be competent, by relying on their equipment to control internet access for devices. Although I recognise this is probably inconvenient for users, none of this is really too impractical - a bit of adverse publicity for cloud and "internet connected", and I could see properly firewalled, egress blocked networks taking off...
(I am more concerned about egress than ingress, because it's the biggest gap most people forget about, and most people just rely on NAT to stop ingress, forgetting any device can phone home anywhere, and they're not monitoring... I don't even allow DNS on that network. IoT that can't handle this just doesn't get in the door)
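The egress-blocked LAN described above can be expressed compactly with nftables on the Linux box doing the bridging. A sketch, assuming the IoT LAN is on `eth1` and the internet-facing interface is `eth0` (interface names and the VPN port are placeholders):

```
# /etc/nftables.conf -- sketch: IoT LAN can reach this box, nothing else
table inet fw {
  chain forward {
    type filter hook forward priority 0; policy drop;

    # allow replies to connections the trusted side initiated
    ct state established,related accept

    # anything originating on the IoT LAN (eth1) hits the drop policy:
    # no internet egress, not even DNS
  }
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept

    # VPN access to this box from the WAN side (port is a placeholder)
    iif "eth0" udp dport 51820 accept

    # let IoT devices reach services on this box, nothing beyond it
    iif "eth1" accept
  }
}
```

The default-drop forward policy is the key part: egress is denied unless a rule explicitly allows it, rather than the usual allow-by-default-with-NAT.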
I can't speak to the newer UniFi garbage, but the selling point for their Edge network products was that you could have Cisco-ish managed switches and routers without paying the absurd prices for ASICs, licenses, ios upgrades, parasitic middleman distributors, etc.
Just finished setting up my Ubiquiti-based home network that includes a dream machine, 6 access-points, and a wireless bridge to an outbuilding. All told about a $1,500 investment I made because I thought I was investing in "best-in-class" hardware and software.
I've done the same, with the only difference being that I bought the stuff a few years back. I never enabled cloud management nor remote access though so I think I'm OK for now.
Not buying any more hardware from them though, unless things significantly change.
I almost did the same thing, but it was clear a year ago that they were moving towards "cloud based" services, something I didn't want to participate in. Looks like it was a good decision, in retrospect.
Ended up with some used Cisco equipment aimed at the small business segment. Similar-ish price to new Ubiquiti gear, and I've spent essentially 0 time maintaining the stuff beyond initial setup. Still don't have APs set up though, I've just been making do with what I had laying around.
We should be clear here that there are multiple types of "self-hosted". Ubiquiti makes essentially little (weaker) Raspberry Pi devices with PoE that are dedicated to just the controller, and a few years back they also forced their (garbage) "Protect" onto their hardware only. They (confusingly) call these "Cloud Keys", though they have nothing to do with the cloud. However, you can also get 100% standalone versions of the Controller that will run on any server or VM you've got, Linux, Windows, or Mac. This is just the Java 8-based controller software and that's it, and you can lock those down arbitrarily hard for any WAN access same as any other LAN network software, no general internet access is needed at all and no firmware is involved.
A lot of people quite reasonably got CKs seeing them as very easy ways to have a low power always on local controller since they didn't have some other server running 24/7 already. If the firmware on those was updated to require tie-in to Ubiquiti's SSO that's a horrible betrayal. But I'm confident in saying the full standalone Controller doesn't since I have mine locked down from any general net access, remote L3 management was done to IP only at the firewall and I've been switching to just putting it all through WireGuard.
I have a few Ubiquiti devices I haven't updated in months, that don't use any cloud accounts, and I used to run their controller software in a container that I only started when I needed to administer something. But now I guess I'm never updating and will be looking to get rid of all their equipment.
What an incredibly consumer hostile and incompetent company. Shame, because the hardware pretty much works reliably.
I'm a bit confused by this. I run a UniFi Controller in a Docker container, have a few APs and a router, and everything works fine. No cloud stuff going on here.
Am I just lucky that I haven't been forced to the cloud yet, or is there something I am missing here?
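For reference, a fully local controller setup like this can be sketched with docker-compose. `jacobalberty/unifi` is one widely used community image; the image name, tag, and port list below are assumptions to adapt, not an official recipe:

```yaml
# docker-compose.yml -- sketch of a LAN-only UniFi controller
# (community image; pin a tag you have vetted)
services:
  unifi:
    image: jacobalberty/unifi:latest
    restart: unless-stopped
    environment:
      - TZ=UTC
    volumes:
      - ./unifi-data:/unifi          # persist config, keys, and the device DB
    ports:
      - "8443:8443"                  # web UI (HTTPS)
      - "8080:8080"                  # device inform endpoint
      - "3478:3478/udp"              # STUN
```

Because nothing here is published beyond the LAN, the controller can be firewalled off from the internet entirely and started only when needed.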
I have a cloud key with no cloud access. It's just that cloud access is the workflow users get steered toward. Setup without cloud access was not clear at all [1].
[1]: I don't even remember the steps, to be honest!
Hmm, even the self-hosted SW can use SSO from cloud... so I'm now worried that our equipment is still vulnerable by whatever system allows cloud logins.
It’s increasingly hard to find providers that don’t, though. The advantages of global management software are pretty high, and the easiest way to implement that is the cloud.
Wasn't really a "cloud" hack so much as a hack of a root user. How they accessed that root user's credentials is not detailed. Phishing? Hardware hack? A careless root user with guessable credentials? It could even be that this particular root user was in on it, for all we know.
In any case, this sort of a hack of any other company's root users would result in the same spectacularly catastrophic pwnage. That your root users have root access on your own machines won't help you.
What they need is to structure their security properly. I'm not sure why this user needed root access to everything globally for instance? That seems wrong to me at first blush, but it could be a matter of me not understanding their business model.
The reason people are bringing up cloud is because it's what affects them. If you have (cloud) access through a company to local devices and that company is hacked, then that could be a very wide pathway into your local setup. The company being hacked, and the related implications, is still not great for a huge list of reasons, but it's the possible local breaches that are more of a worry for a lot of us.
Ubiquiti has recently been pushing their cloud setup (to the point that you can't* set up a local controller without setting up a cloud account), and that's why it's so annoying.
*There is probably a way, but the last time I tried I couldn't find it in setup, so I installed using a previous version.
Our "CTO" was told only last week by someone from the company that helps us with ISO 27001 that we shouldn't use whatever we've got, but get Ubiquiti instead, because it was safer...
> Adam says the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.
A root user breach, seemingly on the organization's main account. Ouch.
I wonder if MFA was set up, with the TOTP creds also kept in LastPass.
This boggles me when I see this option in any password manager (and I think every single one has this 'option').
Why do password managers let people store TOTP next to the password? This completely invalidates the 2FA of TOTP if your password manager gets broken into.
> this completely invalidates the 2FA of TOTP if your password manager get broken into
I think that's the big "if". If you assume the password manager is secure (which something clearly wasn't in this case, but that seems like an outlier), TOTP secret in the password manager still secures the account.
Is such a setup as protective as a separate storage method? No, but it's leagues more convenient. A cloud-based PW manager also solves the problem of a lost/broken/new phone causing you to lose all of your 2FA setups. Some 2FA apps do as well (Authy, iirc), but trust me when I say people lose 2FA codes _all the time_. And then 2FA needs to be disabled by support, which is its own can of worms.
The best security measures are the ones people actually use. If not having to use a separate app is the convenience people need, then I think it's totally worth it.
I mean, if the password manager’s store is compromised, then sure, okay. But if only the application password is compromised then it’s still 2FA since the attacker cannot authenticate with just the password.
The F in 2FA is factor. Satisfying one login request from one factor (password vault) is 1FA. This is why the second factor is normally something that isn't your password vault (historically your head, now a piece of software): a hardware key, a recovery code, etc.
A slightly more generous interpretation is 1.49FA (rounds down), because someone with a reused username/password combination is still blocked by the vault-stored TOTP. But if you're using a vault, the Venn diagram of "people who have your password" and "people who also have your master password" is pretty tight, except for cases where the provider itself has been breached (then all bets are off).
Don't dispose of the second factor for convenience.
And the A in 2FA is authentication, not storage. The password vault is not a factor because it is not what is provided for authentication, the individual password is the factor. The fact that the vault being compromised reveals both factors does not make it no longer 2FA.
Colocating the storage of the factors definitely makes certain attack vectors possible that aren't otherwise possible, but it's still 2FA. Are hardware keys best? Likely, but many probably still have their password vault and TOTP application on the same device (e.g. both Bitwarden and Authy on their phone), which is a middle ground of convenience vs. security between TOTP in the password vault and hardware keys, and I doubt many would say that it's not 2FA.
Because I already use MFA to access my password manager in the first place, and don't want to deal with managing backups for each flavor of MFA app that is pushed on me.
How do you manage MFA for encryption-at-rest? None of the common TOTP systems do this. LastPass and 1Password have built-in "local encryption keys", but they're stored in the same place as the store and only protected by your password. I think theoretically you could set this up with KeePass using a Composite Master Key (combining a password-protected key and a certificate-protected key, storing the certificate separately, ideally in an HSM), but I don't know anyone who does this.
Or just keep them somewhere that isn’t directly beside the password?
I have my password in a password database, and my TOTP tokens on my phone and a Yubikey.
I have a second “break glass in case of emergency” password database that contains TOTP secrets for all my most essential accounts and a backup of the key loaded on my Yubikey.
The root account credentials should be used to create a privileged IAM user and then physically locked away in a box after setting up a hardware MFA device (plus a backup MFA) for the root account: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practi...
The privileged IAM user should then be used to administer other IAM users and roles. All IAM users should be required to have hardware security keys like Yubikey.
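The "require hardware MFA" part of this is enforceable in policy, not just convention. A sketch based on the pattern in the AWS IAM docs: deny all actions except MFA setup whenever no MFA is present (the exact `NotAction` list should be tailored to your workflow):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptMFASetupIfNoMFA",
      "Effect": "Deny",
      "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:ListMFADevices",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken"
      ],
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

`BoolIfExists` matters here: a plain `Bool` check would not deny requests where the MFA context key is entirely absent, such as access-key API calls.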
I have accounts for personal use and what I did was set up TOTP for the root account(s) and a U2F (YubiKey) device for the admin account(s). I use 2 YubiKeys; one primary, one spare. The YubiKey has limited TOTP space, but they're perfect for those types of high value accounts. You store the TOTP on both, so if you lose one you can use the root account to fix the admin account.
> Is something like kidnapping in the threat model for companies like ubiquiti?
I doubt it. That's going to raise some blinking red flags on the radar of organizations you don't want to be on the radar of. Not just three-letter federal organizations, but three-letter news organizations too. The current situation is Yet Another Security Breach that will be forgotten about in 15 minutes. But a kidnapping is interesting! People will be making documentaries and shit about that.
It's so much easier and cheaper to bribe people than it is to kidnap them.
Those kinds of fanciful things are not commonly in threat models because they don't happen. The threat models address things that are likely to happen, which are all variations of someone's device getting compromised.
Printing out the AWS root password and putting it in a safe is almost useless. The root password can easily be reset without MFA by anyone with access to the email address associated with the root AWS account.
MFA is the important one to keep it safe for AWS root accounts, set for the master AWS account and lock root access for all member accounts via SCPs.
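Locking out member-account root via SCPs looks roughly like this (pattern from AWS's published example SCPs; attach it to the OU or accounts in question):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyMemberAccountRootUser",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": { "aws:PrincipalArn": "arn:aws:iam::*:root" }
      }
    }
  ]
}
```

Note SCPs don't apply to the organization's management account, which is why the management-account root still needs hardware MFA and physical lockdown as described above.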
Use Organizations. If you're creating new standalone independent accounts for teams, you're just setting yourself up for some kind of billing/security/governance catastrophe down the road.
I was referring to the root accounts in your organization. The blast radius is more limited, but still a root account that has access to everything within that AWS account.
AWS root user accounts are kind of an Achilles' heel in every enterprise setup using AWS. What you typically do is MFA (bare minimum) + sharded secrets, meaning you need multiple people to use the root user account. You can also hook in additional audit controls, e.g. by automating CloudWatch and sending notifications about any root user login. The alternative is to throw away the password and vow never to use it, or to set up an account recovery process (all of which may not be a great idea, as it can fail when you need it most).
The situation is somewhat more relaxed with GCP Billing Accounts and Azure EA Accounts, though they have better separation of concerns than AWS (billing vs. workload access). Nonetheless, never give these passwords to the finance department, lest they store them in an Excel sheet on SharePoint. Access to these credentials allows anyone to suspend billing for an entire enterprise... I'm not sure what controls the providers have in place to verify any of this before initiating an automated shutdown of all workloads.
By the way, reporting to krebsonsecurity is a giant waste of potential income. This is what the SEC whistleblower program is for. You get paid for submissions there that lead to successful enforcement actions, and the payouts can be very substantial. Furthermore because payouts exist, there's an industry of competent lawyers that will happily take cases with compensation coming exclusively from your payout.
Also, how is this a securities case? The company did not disclose the scale of the breach to shareholders.
The point is that if you deliver a useful securities case to the SEC, you can get paid. But it must be something the authorities don't already know. And yes, when the truth comes out about something like this, there will be a securities case.
The description of the incident in their quarterly financial statement seems to match this description. It doesn't downplay it quite as much as the email they sent customers.
> For example, in January 2021, we became aware that certain of our information technology systems hosted by a third party cloud provider were improperly accessed and certain of our source code and the credentials used to access the information technology systems themselves had been compromised. We received a threat to publicly release these materials unless we made a payment, which we have not done. As a result, it is possible that the source code and other information could be publicly disclosed or made available to our competitors. Due to the nature of the source code and the other information that we believe was improperly accessed, we at this time do not believe that any public disclosure will have a material adverse effect on our business or operations, but it is impossible to gauge the precise impact of any such disclosure. We have taken, and will continue to take, steps to remediate access controls to our information technology systems.
> Adam wrote in his letter. “Legal overrode the repeated requests to force rotation of all customer credentials, and to revert any device access permission changes within the relevant period.”
No. They don't care if customers get pwnd. They care if customers become aware of exactly how they got pwnd and launch a class action. It's shitty but entirely predictable behavior common in these situations.
“force rotation of all customer credentials” = make customers change their passwords, which is a huge red flag that would draw attention to why they were forcing that.
Github just recently logged out all users because they had a bug that could leak other account data into sessions. They were very transparent about why they did that, what happened, and I for one trust them more for it.
So hackers breached the network and still might have been present. Having everyone reset their passwords at that time is the LAST thing you want to do, as the hackers could have just collected all the fresh credentials, a significant percentage of which are also used for other services because users are users.
Legal made the right decision. You clean up the internals, close the backdoors, and then you notify/refresh user credentials.
The plot thickens:
"SHAREHOLDER ALERT: Ubiquiti, Inc. Investigated for Possible Securities Laws Violations by Block & Leviton LLP; Investors Should Contact the Firm"
This type of solicitation is a dime a dozen, but I do find the name of the firm hilarious. Anyone who's had to make patch cables would recognize the name...
It is interesting to do a search of HN for past references to "Ubiquiti". Whenever the topic of routers came up, many comments followed that recommended them above any alternatives. Commenters seemed proud to tell the world they were using Ubiquiti, as if the "HN consensus" for home routers was to choose Ubiquiti.
It seemed to me Ubiquiti would never allow customers the option to install their own OS (e.g., BSD) or boot from external media containing a non-Ubiquiti OS, without sacrificing the benefits of hardware specs that were likely deciding factors in selecting the Ubiquiti hardware above existing alternatives. The intent was clearly to have Ubiquiti retain control over the hardware after purchase. The customer effectively remained tied to Ubiquiti forever, so if the company started serving ads, using AWS unnecessarily, etc., there's no way to opt out. Customer is compelled to accept all updates.
Specs are important, but maybe not as important as control.
Reliance on third parties necessarily increases potential risk. Unnecessary use of third parties is, IMO, poor decision-making. This is of course rampant in "tech" and, IMO, marks a triumph of the salesforce for those third parties over common sense, possibly assisted by network effects. Further, I dislike products where there is a heavy focus on opaque "updates". Again, many customers have been trained to believe that not updating is always the wrong decision. (Meanwhile they have no idea what is in each update.)
As stated in one of the blog post comments:
"It is even worse: Ubiquiti forced all users to use cloud-based authentication even for accessing your controller software on a local network with a local client. This was not even properly communicated but deployed by one of the regular maintenance updates."
Ubiquiti sells turnkey HW, and there never was any hint that this was HW you could roll your own on.
I could buy APs on which I could install OpenWRT. I could set up an OpenBSD firewall. I could run my own DNS. I have done all this in the past. The point is, I do not want to anymore. I have better things to do with my time. So as a turnkey "prosumer" solution their kit works, and I think you will find that is why most people here have recommended it.
You can disable the cloud connection, and I posted how in this thread. People on HN are tech-savvy enough to sort that part.
The fact of the matter is they had a bad security breach and they have a cloud-connected platform. Oops. That sucks. But the reality is that market forces have pretty much tied valuations to cloud connections and the telemetry gathered from them. That is the part that REALLY sucks. I do not blame them for trying to make money. I am angry if they were less than truthful in the details of the breach, and I am sure both the SEC and the court of public opinion will punish them.
For my part, I have no plans to replace the 4 switches in my house with boxes running SONiC, nor the 4 APs with OpenWRT, nor my firewall with OpenBSD, because I just really do not care to have to maintain it. And if I drop dead tomorrow, my wife can likely sort out the UniFi stuff (as I have documentation on the setup), but there is no way she could sort out a roll-your-own setup.
> It seemed to me Ubiquiti would never allow customers the option to install their own OS
I run plain-vanilla Debian on all my Ubiquiti boxes, six or seven of them at this point.
debootstrap --arch=mips
Octeons are awesome. Ubiquiti hardware is the bomb. I hear their software is junk, but I wouldn't know anything about that, I always erase it right after unboxing the device.
I'd like to hear more about your setup, because I'm tempted to try something similar. How do you actually bootstrap it? How do you configure it? Just a bunch of iptables rules? How do you configure the WiFi? What packages do you install?
Debootstrap is the tool that generates a "minimum bootable rootfs". You can use any existing Debian install (even a non-MIPS architecture) to do the debootstrap.
You will need to build your own kernel. Check the OpenWRT project for patches, although only a very few Ubiquiti devices (the USG-3, for example) need kernel patches. For other devices (EdgeRouter-4), the OpenWRT packages make things nicer, like getting the network device names to match what's printed on the front of the case.
Put the kernel and rootfs on a USB stick, plug it into the router, attach the serial console (nice easy RJ45 jack on the front!) and boot. Once it's up you can migrate stuff to the internal soldered-down emmc.
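The flow described above, condensed into commands (the release, mirror, and paths are placeholders; note the big-endian `mips` port was last shipped in Debian buster, and the kernel build step is device-specific):

```
# Stage 1 on any Debian box: build a minimal big-endian MIPS rootfs
sudo debootstrap --arch=mips --foreign buster /mnt/rootfs http://archive.debian.org/debian

# Complete stage 2 inside the rootfs via qemu user emulation
sudo cp /usr/bin/qemu-mips-static /mnt/rootfs/usr/bin/
sudo chroot /mnt/rootfs /debootstrap/debootstrap --second-stage

# Then: copy the rootfs plus your self-built kernel onto a USB stick,
# boot the router over the RJ45 serial console, and migrate to internal eMMC
```

The `--foreign`/`--second-stage` split is what lets a non-MIPS machine do the heavy lifting while the architecture-specific package configuration runs under emulation.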
"It is even worse: Ubiquiti forced all users to use cloud-based authentication even for accessing your controller software on a local network with a local client. This was not even properly communicated but deployed by one of the regular maintenance updates."
Huh? That is demonstrably not true. Any more details?
Cloud-managed anything has a giant red target painted on it, especially infrastructure equipment. I'm still surprised anyone thinks it's OK to use their ISP-provided router and wifi, let alone having it be managed remotely by the manufacturer.
The problem is that on-prem isn't much better in many cases. Only the largest organizations have the capability to operate deep defenses against these threats whether it's the cloud, or the on-prem.
If you and your team have the skills you can operate fairly effectively on a small scale, but that's a pretty luxurious situation. Most home users can't tell the difference between a router and cable modem hence it's in the interest of cable providers to lower support costs by providing a managed offering. It's terrible from a security perspective, but customers have signed that away.
The common theme running through these breaches is that the organization isn't necessarily small, but they aren't Google/Apple/Microsoft-size either. Those companies have multiple layers of expertise and the cash flow to hold up development of anything in order to make sure things are secure. It's hard to wing stuff once the bureaucracy understands security is needed. They even start pushing their product security initiatives outside of product development to mundane departments because they get attacked by very smart actors. You can see from the news it's still far from perfect.
Once you get to companies the size of Ubiquiti, you start having challenges with implementing close to the same degree of security because you don't have float in the system to allow for additional costs, delays, etc. on top of the lack of expertise. Apparently Ubiquiti have been hemorrhaging expertise in other areas due to opportunistic cost-cutting, so it isn't a surprise that they suffer and respond in this way given that culture. A bad security decision by one exec in companies of this size can cut across many departments which doesn't happen in the behemoths.
>The problem is that on-prem isn't much better in many cases. Only the largest organizations have the capability to operate deep defenses against these threats whether it's the cloud, or the on-prem.
One of the truly sad things about all this though is precisely that UniFi made this a lot easier for small orgs and even individuals (and could have gone even farther). Stuff like VLANs and RADIUS became dramatically more accessible "for free", using just what was built-in to a UniFi stack someone might get anyway. Back when they were still more competent Ubiquiti added management VLAN support across the lineup, and the setup is fairly intuitive and then just works. At one point I'd hoped they'd continue in that direction much more. It's not some impossible thing, it mainly just needs better UX putting the pieces together in a graspable way. Graphical VLAN topologies and point-and-click, automating all the certificate authentication/signing stuff, the generation of profiles for onboarding, all the components for this stuff exist right now just not, well, unified.
I think a lot of places don't want to in fact, because they'd rather push cloud ties since that can yield subscription revenue.
I didn't make the claim that there is a problem with Ubiquiti using AWS. The problem is that the conditions exist for Ubiquiti to fail with cloud authentication.
If it hadn't failed with that, it'd have failed in another way. Perhaps that failure wouldn't have been as bad in other cases, but we can already see how their products have declined for the same reasons.
Without cloud authentication, the only way to mass-compromise Ubiquiti devices would be to compromise software updates, which companies usually do a better job of securing.
On-prem is much better in most cases, because if there is a bug, an attacker has to scan the internet and find you before a patch is released and you update. And if the bug is only reachable from inside your network to begin with, the attacker would already have to be inside your network.
As far as the team having skills: there is not much that Ubiquiti does that can't be handled on-prem. I mean, you're already installing physical devices; how much more effort is it to install a controller? Sure, that means you're on the hook for upgrades, but in most cases you're better off not applying them instantly anyway.
And to clarify my point about ISP gear: I agree that the average user can't be expected to understand or care. I meant so-called technical users.
> Ubiquiti’s stock price has grown remarkably since the company’s breach disclosure Jan. 16. After a brief dip following the news, Ubiquiti’s shares have surged from $243 on Jan. 13 to $370 as of today. By market close Tuesday, UI had slipped to $349.
Aaannd this is why we can't have nice things. Like trust in our vendors. Or security. Or consequences.
> the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee
The interesting part of this story is how the employee's LastPass got popped. My guess is their local workstation was compromised, and their LastPass was either not logged out in a browser plugin, or they didn't have 2 factor auth required for each login and a keylogger got the password. In either case, it's a good reminder to be paranoid about your password manager, make sure it's got a logout timer, and use 2 factor auth.
I also don't let my cloud password managers touch a mobile device. It's fairly inconvenient, so I hesitate to recommend this to others. But I don't trust mobile devices very much. Anyone have thoughts on this?
> My guess is their local workstation was compromised
Honestly I don't think it was even that complicated, considering when I needed to spend money on some SaaS product the "chief accountant" (because there was no CFO) straight up sent me a photo of the corporate credit card and said "delete that when you're done".
Sure, but to be fair, credit cards really aren't that dangerous a credential to wave around. You can cancel your card at any time, and even dispute the charges. It's like instant key rotation, with a way to also roll back time.
> My guess is their local workstation was compromised
You mean someone was physically at the laptop/desktop and could access the OS and apps? Maybe if the employee was working remote (covid?) from, say, a cafe and left the laptop unattended when refilling coffee?
Or something else? ... Hmm, could also have been eg a browser zero day that gave someone remote access to the computer? Or a dev tools supply chain attack?
Or someone watched over their shoulder. 1Password makes it all too easy to accidentally reveal your password within the app. Someone with a video camera just needs one clear frame - 1/60th of a second - with a good enough view.
Should have blown the whistle to the SEC instead. SEC whistleblowers get paid. Up to 30% of eventual penalties paid by the company with no upper limit. Lying about a breach could be securities fraud.
This might just be a law firm fishing for people willing to be plaintiffs when they sue. So this, in itself, might not mean much of anything. It might just be a lawyer who read the news and thought, "Hey, let's see if we can find enough people willing to sue!"
Well, this absolutely sucks :(. I've been a huge supporter of Ubiquiti ever since I was buying their mini-PCI cards and sticking them into Soekris Engineering boards (Ubiquiti started out as a hardware company).
The magic thing that absolutely sold me on their equipment was the ease with which you could provision and mesh new gear. Does anybody have anything that compares with that ease of use?
To explain what I mean: I recently had a buddy move into our guest house/apartment. While we waited for the ISP to come out and hook up his internet, I just put an AP on his counter, powered it up, and meshed it into our home network. The whole process took less than a minute and didn't require any running of ethernet.
(Maybe that's a common feature nowadays and I've just been out of the industry for so long?)
Ha I get it. The way I look at it is, I have chosen my security sin and that's Google. I turn off ad settings, pay for GSuite, YT premium and Google one, have ad block/ad guard everywhere and buy their nest home products.
Smaller threat area, much larger utility plus they by default have more resources than any other company to have better security.
Don't get me wrong, I was looking forward to moving to ubiquity but that's not happening anymore unfortunately.
As far as I'm aware Google has not had this magnitude of hack recently.
There was just a thread[1] yesterday about them starting to serve ads in their UI. It seems this company is rapidly losing credibility.
I have had plans kicking around for a bit over a year to do a full build out using their products, and just within that time it seems like they've gone from a glowing reputation to severely tarnished. Unfortunate, as it seems like they once had great products.
It really doesn't get worse than this. But isn't Ubiquiti more of a prosumer company, like MikroTik? MikroTik gets a lot of heat when they have a security vulnerability and gets downranked for it, as if their security profile were far, far away from Ubiquiti's (something like "US vs. some east EU country"), but this event says a lot about Ubiquiti's upper management and their internal security practices.
Have MikroTik had any security vulnerabilities anywhere close to what has now been revealed about Ubiquiti? MikroTik's firmware seems very solid and I get the impression that they care about security and routines.
It seems the issue with Ubiquiti here has potentially wider implications for users of the equipment (signing keys compromised, cloud dependency giving access to the remote management plane).
An individual vulnerability in a device is an issue but it gets patched. Hopefully it can't be exploited remotely. My biggest annoyance is when "infrastructure" ends up with outside connections in place (to the cloud or elsewhere), which breaks this model down (trusting the provider to mediate remote access, for example).
They're a big single point of failure, and this incident really proves that.
Fun fact - a lot of Ubiquiti's engineering is located in that same "east EU country". In fact, if you look at the open positions - https://careers.ui.com/positions - it appears most of the development appears to happen in Central/Eastern/Northern Europe.
Yikes. I have a (Ubiquiti) EdgeRouter X that I previously used for a fiber setup (and it's shelved now because it doesn't like this ISP's modem), had planned to get a ER-4 later down the road. Been on the fence for any of their APs for months upon months, now I'm glad I bought neither.
Technically EdgeRouter gear is unaffected as it's very cloud-optional, but I can't bring myself to trust any firmware from them at this point. It supports OpenWRT so I guess I'll install it and go back to OpenWRT.
I see this thread already has people discussing alternatives, so I won't ask for ones -- just had to put it out there that if you own an EdgeRouter, chances are that OpenWRT has a build for it.
Why do people trust any IoT devices these days? Shouldn't we be trying to reduce our exposure to (inevitably insecure) software? What benefits does it provide that are worth the unbounded risks?
It’s not _that_ unbounded? At least not yet! Until a tech savvy neighbor who’s also a creep can easily break into your network and home camera I’m not personally worried.
Why does it have to be a neighbor? It says "internet" on the tin. Do you have confidence that random people on the internet can't do the equivalent of a port-scan on you?
The other way I think of it is, I don't use it right now. It likely has open doors, intentional or unintentional. If the open doors are widely discovered, reliably closing them seems difficult. The highest-leverage point in time to influence this story is before I start using it. "The only winning move is not to play."
The question is what incentive a random person on the internet has to find and target me. I'm a single dude who's not rich, and I'm not gullible to scams (at least not easily). So unless they have a personal grudge against me, I would probably not be currently worried about installing a doorbell camera, for example. The threat modeling will change the moment I have a family, of course.
I see it no different from driving a car. You can get carjacked, you can get in a crash, you don’t just not drive a car because of it, you just calculate your risk tolerance and do it.
Imagine someone taking control of your door and telling you you need to pay them $50 at a random bitcoin address before you can open it.
$50 isn't a reasonable payoff for most carjackings, but this isn't like a carjacking. They're doing the same thing at the same time to 1000 people using a script they wrote. That changes the payoff, and that means more people are likely to try to do something like this.
This is an extremely mild scenario. It's possible I'm wrong about IoT, and there's a case for using it in its current state. But one thing I'm _sure_ of is that analogies with cars don't work.
been doing it for years. meet the new boss, same as the old boss.
this is the other side of the coin of "you don't need privacy if you have nothing to hide", and it's exactly as stupid in application here as it ever is.
"Adam says the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies."
Holy...
Wow. That is catastrophic. Everything is compromised. That's a complete rebuild.
> Ubiquiti’s shares have surged from $243 on Jan. 13 to $370 as of today.
How are we ever going to solve security as an industry against this? Again we're told that security isn't important. Being the first to market and insecure is the winning play and that's just fucked.
I don't think that it is a solvable problem if the economics stay the same.
SolarWinds is actually trading almost $2/share more than it did 1 year ago today ($15.67 v $17.23). Sure, it is down from its 52 week high ($24.34).
I would argue that SolarWinds should not be allowed to be in business in its current form, considering what a threat they have been to themselves and others in their mis-handling their software practices and subsequent breach. If an individual did what they did as an employee of the government, they would currently be in jail.
It is probably one of the most impactful national security events in our lifetimes and the impact of this event will be felt in certain areas for years or even decades.
I feel like we have to regulate this at a governmental level to get anywhere. We keep automating more and more of our society and its clear we're unable to protect it but the casuals don't get that and keep charging ahead and we enable them. The amount of power we gift to a given attacker seems to just grow and grow.
But how do we achieve political intervention when technologists and politics appear to be completely incompatible? The closest I've seen is the Pirate Party which never get more than a few percent or that democratic candidate (Yang was it?) and he was pretty fucking clueless on the tech when poked with any significant vigour.
It is certainly a difficult problem and as such, like most difficult problems, it will likely not be fixed in any meaningful manner. We will likely be talking about this exact issue in 5 years, 10 years, and 20 years from now.
Cyberspace Solarium Commission [1] created a robust and well documented roadmap for the Biden transition team to address some of these fundamental problems. IMHO, it is one of the better policy documents and has a number of really good recommendations that I believe would be extremely helpful. The #1 thing I think we could do is address accountability, who is responsible for the security of devices/software and what legal recourse should people have if the vendor doesn't adequately secure or support their products.
I think that there are a bunch of issues and one of the biggest ones is that what we say vs what we do are 2 different things. We also have issues where many of the core business practices that are commonly accepted are incompatible with building a secure and resilient infrastructure.
You don't need to be technical to pass laws for this stuff. Technology doesn't change the fact that underneath it's always greed/negligence/etc. These are things that have existed forever.
At least for home networking, I'll always pick something I can throw OpenWRT on over a managed service, subscription or closed-source option.
In the 15 years I've been using OpenWRT, I have never been disappointed with it, and I don't have to worry about some company's "secure" backdoor into my network being exploited.
Mikrotik hardware if you're looking for hardware you can upgrade.
I haven't found the need to upgrade my hardware in a couple of years so I don't know what the market currently looks like. I'd just look on the OpenWRT wiki or forum and see what is best supported and buy that.
Also, Atheros radios are generally supported really well on Linux, so I stick with hardware that has an Atheros chipset over something with a Broadcom radio.
Well, guess I won't be able to drop a few thousand on Ubiquiti gear anymore, at least until we get some more details. Hopefully this account isn't fully truthful; otherwise Ubiquiti has really screwed up.
A few months ago I was considering outfitting my apartment with Ubiquiti gear but ultimately decided to stick to an aging AirPort Extreme and a couple of cheap ethernet switches after seeing reports of bugs with various Ubiquiti pieces. Seems that was a good judgement…
Ubiquiti is another one of these companies where if you did nothing but read about them on HN, Reddit, et al, you would think they're filing for bankruptcy tomorrow, set orphanages on fire, kill puppies, etc. The negative hyperbole around this company is something else, hack or not. And yet, all they do is thrive...
It's a long-tail if I had to guess. In my "circle" of coworkers almost every last one has ubiquiti today, and every last one is planning to replace it with something else when they make the jump to WiFi-6.
Maybe we're the anomaly, but I have a feeling 2 years from now if they continue down the path they're on, their earnings will not be quite so rosy.
You'd have lost that bet already. One of them switched to Aruba last week. I've already replaced several pieces of ubnt gear as well and posted for sale on ebay. The APs I'm holding off until there are some solid WiFi 6E options.
I know of at least two others that currently have hardware on order to replace existing ubnt routers with OPNsense so you can add them to the list by the end of April.
The hardware is very cheap and the market for their products is thriving. In fact, it's possible to run custom software on it without using their cloud at all.
> if you did nothing but read about them on HN, Reddit, et al, you would think they're filing for bankruptcy tomorrow, set orphanages on fire, kill puppies, etc.
Seriously I'm just tired of it. Do you know how many tech geeks over the last few years have proudly proclaimed online that the company is "going downhill" and they'll never buy any more Ubiquiti products? 50 billion, that's how many. How many follow through? Evidently zero. It's comical. The hack obviously not good, but GMAFB.
Interesting to see what Troy Hunt does next considering they send him free stuff[1] and he speaks highly of them. He's so far only said it's "obviously a really bad look"[2]
I’m not holding my breath. Troy is a consultant. If they sent him that much free gear, what, he’s gonna backpedal and say “I’m removing everything UBNT from my network”? Definitely not. “That’s a bad look” is an understatement for the giant cluster that this is.
I’m willing to see what Ubiquiti will do to make it right before I switch away, because I have a local-only setup of EdgeRouter and UniFi APs that’s been absolutely great in the years I’ve had it, but this is really last chance saloon stuff now.
I’m looking for a proper post-mortem and the steps to make sure it can’t happen again, recommitment to local-only users and respect of the customer, and a step back from the push to cloud everything.
I looked into Ubiquiti years ago while trying to find a decent access point. Couldn't stand the thought of having to configure stuff "in the cloud" or running the then giant Java based controller locally.
Floundered for a while with random enterprise access points bought used off eBay that either drew too much power or were still buggy (Netgear was the worst).
Then I came across Mikrotik. Their hardware and conformance is somewhat dated, but I've never had anything run so stable. Haven't looked back and been going on 4 years now.
I wonder why their legal department would PREVENT them from saving their users.
What legal reason would exist for that? I thought legal would instead force them to save their users, since otherwise they would risk getting sued by all of them by all the damages caused or something.
> a source who participated in the response to that breach alleges Ubiquiti massively downplayed a “catastrophic” incident to minimize the hit to its stock price, and that the third-party cloud provider claim was a fabrication.
I'm sure their lawyers don't know anything about tech or forensics, but they know how to buy shareholders time in a way that minimizes anyone's chances of going to prison or facing serious civil liability. If you ask someone in charge of hiring corporate counsel what they look for in a lawyer, they will flat out tell you "a good risk manager who understands discretion," which just means "someone who's going to tell us what we can get away with".
The regulatory system in the US is sufficiently dysfunctional that there is zero incentive for corporate counsel to even consider what's in the best interest of consumers.
> I wonder why their legal department would PREVENT them from saving their users.
Good legal departments understand that the company is there to serve the users and make them happy and operate within those constraints (even trading off possibly liability when it makes the products sell better).
Horrible legal departments will block anything that has even a smell of liability, even when it comes to sabotaging the product itself and hiding serious issues from users and employees.
I wonder how difficult it would be to implement a rudimentary controller for their APs. The WLAN configurations are just text files in the /etc directory. Getting feature parity would be a lot of work, but I bet the bar isn't too high for simple functionality. Most of the "magic" is happening in hostapd on the APs anyway.
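To illustrate what I mean: the simplest possible "controller" is really just a thing that renders hostapd-style config text per AP and pushes it out. A hypothetical Python sketch (the option names are standard hostapd ones, but the file layout and push mechanism are made up, not Ubiquiti's actual setup):

```python
# Hypothetical sketch of the "render a per-AP hostapd config" half of a
# rudimentary controller. Pushing the result to /etc on each AP (scp,
# ssh, whatever) is left out.
from string import Template

# Minimal WPA2-PSK hostapd config; real deployments would add VLANs,
# RADIUS settings, etc.
HOSTAPD_TEMPLATE = Template("""\
interface=$iface
driver=nl80211
ssid=$ssid
hw_mode=g
channel=$channel
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=$psk
""")

def render_ap_config(iface: str, ssid: str, channel: int, psk: str) -> str:
    """Render a minimal hostapd.conf for one AP."""
    return HOSTAPD_TEMPLATE.substitute(
        iface=iface, ssid=ssid, channel=channel, psk=psk
    )

if __name__ == "__main__":
    print(render_ap_config("wlan0", "HomeNet", 6, "correct-horse"))
```

Feature parity is obviously a different beast, but the core config surface really is this small.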
I think you are wrong. I have been working on https://openwisp.org for some time, and implementing a controller that is robust, handles many different corner cases, and offers good functionality as well as ease of use is a challenge that requires several people working full time. Even simple functionality is a lot of work, unless by simple you mean really trivial. If it weren't hard, there would be many alternatives, but as far as I know there aren't many.
I was definitely shooting my mouth off to some extent. I'd defer to your experience for sure. I took a look at your project pages briefly and I'm going to spend more time looking them later. It definitely looks neat, and much more "feature-ful" than I'd be looking for. I'm particularly interested in looking at your modular configuration system.
My needs definitely don't exercise corner cases. Most of the UniFi gear I've got out there is just running a single SSID w/ WPA-RADIUS and a RADIUS-assigned VLAN. Here or there I've got an SSID w/ a PSK and a hard-set VLAN. Nothing too fancy. Adopting new APs quickly and easily based on a "magic" DNS name, alerting when an AP disappears, and syslog to show association/roaming/disassociation events is about all I want. I'm putting Customer-owned gear in small offices w/ under 10 APs, rather than being a service provider.
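For the syslog part, the sort of thing I'd hack together myself is just pattern-matching hostapd lines. A hypothetical sketch (the log wording shown is typical hostapd output, but it varies by version, so treat the regex as illustrative):

```python
# Hypothetical sketch: extract association/disassociation events from
# hostapd syslog lines.
import re

# Matches lines like:
#   "... hostapd: wlan0: STA aa:bb:cc:dd:ee:ff IEEE 802.11: associated"
EVENT_RE = re.compile(
    r"hostapd:\s+(?P<iface>\S+):\s+STA\s+(?P<mac>[0-9a-f:]{17})\s+"
    r"IEEE 802\.11:\s+(?P<event>associated|disassociated)"
)

def parse_event(line: str):
    """Return (iface, mac, event) for a hostapd 802.11 event line, else None."""
    m = EVENT_RE.search(line)
    if m is None:
        return None
    return (m.group("iface"), m.group("mac"), m.group("event"))
```

Point a syslog tail at that and you've got the roaming visibility I'm after, without any controller at all.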
Interesting information, thanks for sharing. Definitely doable by adding also the monitoring module, which would show you associated clients in a chart.
But it's not as easy to set up as Ubiquiti though. I hope in the future it will be.
The most disconcerting part for me is the fact that the attackers gained full access to one of the administrators’ LastPass account. I would love to know how that happened.
Don’t have time to dig into this right now, but I have a Ubiquiti WiFi AP at my home behind a NAT; does this breach mean my home network is vulnerable/effectively exposed to the Internet? Do I need to log off HN and deal with this now, or can it wait?
It depends. How do you manage said AP? The leaked-credentials issue here is specifically in SSO cloud authentication to Controllers, which are used to administer all the actual hardware devices. The devices themselves aren't affected. So depending on how, or for that matter if, you manage them, you may be unaffected as well. That separation has always been a major touted advantage of UniFi, and it has indeed proved true with this very incident.
Your post seems to imply you have just that AP and that's it? If you set it up initially (putting the controller on one of your own computers temporarily maybe), and then just left it standalone from there on out you're fine. There is no need to have an active Controller for all the hardware to work as configured, a Controller is just needed to change configuration, collect real time statistics/send notifications, and do necessarily active things like run a guest portal.
If you are running a Controller, but you're doing entirely standalone on your own hardware (or your own cloud service for that matter), and haven't enabled Ubiquiti SSO cloud access, you're unaffected. That's how I've always run since I don't trust 3rd party cloud stuff for something like this, ever.
It's """only""" an issue for their cloud service, and apparently their "Cloud Keys" and "Dream Machines" as well, since they pushed it on people in some recent firmware. Which granted covers a lot of surface area, and Ubiquiti has pushed it very, very hard (see the advertising outrage from just a few days ago). But it's thankfully still not everything.
Thanks for the detailed reply. As you correctly inferred, this is my situation:
>Your post seems to imply you have just that AP and that's it?
I recently moved to a house with a preexisting network, so I have only the AP itself set up with the Ubiquiti router/network controller still in storage. I use the mobile app to configure the AP. It sounds like the AP won’t phone home or open tunnels to their cloud by itself, so I’ll turn it back on for now.
Ugh. Guess I’ll just go wired for now and unplug the AP. Hopefully I’m only paranoid, but I really don’t like the feeling of a hole in the network with my family’s NAS and IoT devices.
Never again with the cloud-connected network appliances. Time to build a router from scratch, I guess.
You can run the AP locally with the standalone controller appliance in a container or VM[1]. Pretty simple, and doesn't require a UNBT login. Probably still worth doing a factory reset on your AP first, if you're paranoid like me...
Ran into this [1] issue with Ubiquiti and Stripe integration. Short story: Ubiquiti's integration insists on sending credit card numbers directly to Stripe (vs. using a more secure method).
The issue has been there for 2 years, which is beyond odd. When I reached out to tech support, it was effectively closed as a known issue.
I was looking at upgrading my home networking equipment with Ubiquiti, but with the breach and the hidden advertisements in their products, I have ultimately decided against it. They have lost thousands of dollars in potential sales (from me anyways).
Guess I will just have to go bargain hunting on the used enterprise market, or just ask my BigCorp networking team to see if they sell or give away any of their equipment and try to repair it myself. My only concern would be noise generation and power consumption since they were built for use in data centers.
I wish I could say I was surprised :(. Along with a bunch of other people who've used their products for a decade or more now, I've been watching the company's ever-steepening downward spiral become really noticeable over the last 3-4 years. In an academic way, it's actually been kind of fascinating to watch it happen in real time with fairly front-row seats: the deepening technical debt (lots of very old hardware still sold as new with no replacements in sight, inability to migrate their frameworks or keep their sources up to date, and more), bikeshedding ramping up and up, the forums starting to fall apart, marketing writing more and more checks development couldn't keep up with and then that getting brushed under the rug (the SHD and its dedicated security radio come to mind), the forums getting nuked entirely in favor of a horrible New Web thing with even worse bug/feature tracking than before (and there wasn't any proper tracker before), ever-worsening stability, universally hated UI changes that would just get shoved through anyway, and on and on. It's been everything one reads about, "Ubiquiti's Burning Platform" and all that, and in turn seems like it should have been avoidable. Yet on it ground with sickening inevitability. It's just now finally starting to reach critical mass and become visible to the more general public, spreading through the same tech grapevine that gave them such a boost in the first place.
But less academically it's depressing as hell too, because the grapevine liked them for good reason and there still isn't any drop in replacement. Their p2p/p2mp gear is still solid. And UniFi was a wonderful concept solidly executed. It also eschewed the subscription/cloud bullshit so many other players are chasing, which indeed is something of a saving grace here. While there is a cloud option, lots (if not most) people can and do run their UniFi networks completely self-hosted even for remote sites. The single pane of glass, ease of provisioning and recovery, etc made sense and saved time. And they had an incredibly enthusiastic and supportive community, like when they asked about moving L3 switching way back on the old forums (back when the rot was in its earliest stages and not clear yet) they got huge amounts of feedback, their beta testing had many people putting in a lot of good work.
Such a damn stupid waste. And the nature of the beast for tech infrastructure is that market signals are always behind the curve and thus muted until things are already getting to be too late. Robert Pera also owns the majority of their stock IIRC so there isn't any way to effect an outside management change there either. It is odd to me that nobody has sought to go after them directly and aggressively, though I heard rumblings late last year that Cisco was giving a go at something clearly aimed right at the UniFi market (no subscriptions like Meraki)?
At any rate, final straw for me on routing was the flop their "UXG" has been, I finally gave up at long last and began migrating everything to OPNsense a month back. And once the single pane of glass is broken, the barrier to start moving more drops in turn and network effects (harhar) begin to go into reverse. I'd still be happy if they somehow recovered, but if they do I think it'll be a long time. Problems that build for years tend to take years to reverse too, if they can be. I hope we get some stories someday internally on how it all went down.
I am extremely relieved none of our Ubiquiti devices are set up for this cloud shit. (We use the PtP stuff, not the APs, the cloud bits are optional there.)
Then again we have a "clear skies" policy & wouldn't have bought anything that requires cloud blah. (Which covers a whole bunch of other vendors too, looking at you Cisco "SmartLicense")
It's not just incompetency, it's malice, to treat your own customers in this fashion. But this is what happens when there are consistently no consequences for these kinds of breaches. Neither government nor market punishes these kinds of events in any meaningful (cost penalty) way. All the cost is shouldered disproportionately by victims.
I have some UniFi cameras and UniFi Video on a Linux box, and they are phasing out UniFi Video. I don't want to move to the cloud offering. Is there a way to use the hardware with open source software?
Also interesting and noteworthy: it appears that today, just 7 hours prior to this Krebs article, an investigation was launched into Ubiquiti for potential securities fraud.
How can you see whether you have been affected, or whether they have poked around your setup and maybe even left something behind? Theoretically you can’t really trust anything on your network anymore.
It seems naive to want to talk to the press under a pseudonym — Adam, in this case.
When looking for leakers, internal security auditors don’t need proof you are Adam in order to fire you. They just put enough pressure on the most likely Adams such that they quit.
You will be one of them. If another Adam does so, so be it. Your actions likely flushed the other leaker when you thought you were the only one. You won’t be able to handle the pressure. Neither could she.
Verkada, now Ubiquiti, yikes. Also according to this leaker, it seems like they tried to cover it up before letting the public know. They are on my blacklist now.
> Ubiquiti’s stock price has grown remarkably since the company’s breach disclosure Jan. 16. After a brief dip following the news, Ubiquiti’s shares have surged from $243 on Jan. 13 to $370 as of today. By market close Tuesday, UI had slipped to $349.
Until these companies are held massively accountable for such negligence, nothing will change. Similar to what happened to Facebook and all they had to do was pay chump change fines.
I have a TP-Link Archer C7, a Linksys WRT3200ACM, and a Netgear R7800. I've used all 3 as my primary device on my network running OpenWRT.
I bought the TP-Link first. It worked, but it's an underpowered device and it was struggling to keep up with all the devices on my network. It also has a MIPS chip, so it couldn't run some ARM-only software I wanted to use.
I replaced it with the Linksys. I had nothing but problems with it. It was fast and reliable using the Linksys firmware (but functionality was severely limited). When running OpenWRT it was a buggy disaster. One example problem, it would randomly start dropping bonjour packets for no explicable reason thus preventing my wife from being able to print from her iPhone. It had to go.
I was about ready to give up on my OpenWRT dream, but I took one last chance and bought the Netgear for cheap off of eBay. It's great. It's fast, it's reliable, and so far it just works (been running it for a year now).
So the C7 is good if your needs are limited, but I really do recommend the R7800. It's a very nice device and you can probably find it for cheap on eBay like I did.
T-mobile sold a bunch of rebranded asus routers a couple of years back that are still excellent today and can be had for pretty cheap. Comes with some shitty tmobile spyware I think, but you can flash openwrt on it.
Speed tests are pretty unreliable, but the peak unobstructed wifi speeds I've gotten from that have been better than what I get from my Unifi 6 lite, which supports wifi 6, even on wifi 6 devices. (couple hundred mbps on a home gigabit plan from Nazi Germany I mean Comcast)
Thanks for the tip. This is a rebranded Asus RT-AC68U and in the $200 sweet spot. Unfortunately it's a Broadcom-based device which means OpenWRT support is limited, but apparently DD-WRT has better support[0]:
> DD-WRT has a license agreement and NDA in place with Broadcom that allow usage of better, proprietary, closed source wireless drivers (binary blobs) which they are not allowed to redistribute freely.
Is there a market for good networking equipment? If Ubiquiti was it and it's gone, and reading this thread there are no good alternatives, then it sounds like there is an opportunity for a new company.
> Ubiquiti’s stock price has grown remarkably since the company’s breach disclosure Jan. 16. After a brief dip following the news, Ubiquiti’s shares have surged from $243 on Jan. 13 to $370 as of today.
This whole thing shows how tech such as passwordless, device trust, approval flows, should be in place at basically any company. And your cloud accounts need to be hooked up to your SSO with said features.
IMHO there should be a default paragraph text font size specified in the browser settings and all the other styles should be derived from it given just coefficients specified in the page CSS.
I reached out to Ubiquiti because we never got an email to rotate our passwords, and they told me I wouldn’t get an email unless I was using “Ubiquiti verified SSO.”
Yeah my few Unifi devices (and the controller SW instance) are already restricted to their own VLAN, but I'm going to disable outgoing internet access as well.
I use Mikrotik (or OpenWRT) for routers, but Mikrotik is not that good on WiFi. People recommend Ruckus, but it's pretty expensive (and not that easy to get second-hand in Europe, or Spain at least).
Is there any (good) brand with pricing between Mikrotik and a Ruckus that doesn't need a cloud connection?
Only if you have the newer Cloud Key or Dream Machine. The older Cloud Key isn't fast enough to handle the new OS (which ended up being good in this case, since it's still getting security updates).
I have not looked at Protect yet; however, for Network you can disable remote login after creating a local account. Open the Network app on iOS and make sure you go to the main page that lists the controllers. Click on the arrow. On the next screen you will see a section called “Launch Type” which lists all the access methods: local IPv4, IPv6, and cloud. Pick the local IP address.
Not sure if it's still the case, but last time I dug into it, eero was also the only consumer-grade software-defined-radio router/AP, allowing them to rapidly patch various vulns that others couldn’t patch at all, or took much longer to.
No, they don’t. Ubiquiti literally covered up a giant security breach to avoid backlash while putting every single customer at risk for 3 months. Imagine: for 3 months someone had direct access to your entire network and you didn’t know.
What's your exposure if you had a cloud key enabled for remote access, but now disabled? Sounds like anything is possible if they compromised the cloud key (which is a device, not a "key")
The APs and switches are stateless by design (which I sort of like), but if you make CLI changes on the controller using the config file they are not reverted in my experience.
Though it's not super well supported either because they prefer people using the web UI to the config file.