> “The breach was massive, customer data was at risk, access to customers’ devices deployed in corporations and homes around the world was at risk.”
> “They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration,”
Maybe putting your network control plane in 'the cloud' isn't such a good idea after all...
Edit: Just re-read the article, this part stood out:
> the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.
> Adam says Ubiquiti’s security team picked up signals in late December 2020 that someone with administrative access had set up several Linux virtual machines that weren’t accounted for.
If this is true, and whoever breached them had full access to their AWS account, can we really trust them to clean up all their tokens and fully eradicate all forms of persistence the hackers may have gotten?
Was shopping for alternatives to my Ubiquiti last night. Seems like there is nothing good out there. EnGenius has shit hardware and a cloud controller. Aruba has a cloud controller AND you have to pay for a license. Cisco makes you pay for a license. TP-Link is cloud-based.
WTF. Does anyone have a decent WAP where I can use PoE, deploy like 5 of them and have them support roaming between APs, all managed locally? Is that too much to ask?
Disclaimer: worked for Meraki (now Cisco Meraki) for several years.
Generally, halfway decent wireless APs are all targeted at the enterprise market. Consumer hardware is a brutal race to the bottom, as lay consumers aren't qualified to compare options based on anything but price and UI. Ubiquiti was an outlier in trying to bring enterprise features to the consumer market.
The problem for enthusiasts and small business/home office setups like yours is that both the enterprise market (e.g. Meraki) and the premium consumer market (e.g. Google WiFi) focus heavily on ease of management - cloud controllers are table stakes these days, not a controversial feature. Part of that premium that Meraki, Aruba, and that class of enterprise supplier charge is about having a trustworthy and secured backend.
Note, however, that roaming between APs is a feature of the 802.11 standard; you just need to have all your APs on the same layer 2 (802.x) network, and using the same SSID and credentials. No fancy hardware required, and you can even mix and match vendors.
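For illustration, "same L2 network, same SSID, same credentials" can literally be the same config stanza on every AP. A minimal sketch in OpenWRT terms (the SSID and key here are placeholders), with each AP bridged into the shared LAN:

  # /etc/config/wireless -- identical on every AP
  config wifi-iface 'ap'
      option device 'radio0'          # this AP's radio
      option mode 'ap'
      option network 'lan'            # bridge into the shared layer 2 network
      option ssid 'HomeNet'           # same SSID everywhere
      option encryption 'psk2'
      option key 'shared-passphrase'  # same credentials everywhere

Clients then decide for themselves when to hop between APs.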
My personal experience with Meraki has been the very definition of vendor lock-in.
The security appliance was relatively cheap, then we saw the fine print: the total bandwidth was artificially limited, and only increased to an adequate level two product tiers up. Sorry Mr BubbleTime, you need to buy a new appliance and a new license. Your old one is worth nothing and non-transferable; watch it rot.
The switches seem absurdly expensive when you consider the 5-7 year licensing costs. And the quality is poor at best considering Meraki went and pushed a firmware update that bricked every fan in every 48 port switch we had. But you have the security appliance so it “only makes sense” to pay for these switches.
We had an IPSEC incompatibility between a vendor with an ASA and our Meraki gear. The solution was to buy a Cisco device just for that one connection.
All in all, it’s passable, but because of the lock-in it’s not like I have a cost effective choice to get away from it. I wouldn’t choose it again.
That said, it does offer a mediocre IT tech a single pane of glass they have to try to mess up.
Of all the Meraki factors I’ve learned and considered, the fact that it is cloud-based is the least important to my recommendation or lack thereof. There are lots of people that would be happy to explain all the ways my experience is wrong, but whatever.
Completely agree with the lock-in, and they aren't the best or most featureful devices out there. It seems the sweet spot for them is places with LARGE distributed footprints (such as retailers), where you can have very simple networking (some traffic back to HQ, the rest to the internet).
It fits well with being able to rapidly bring bodies into a project and implement change X across hundreds of stores, while having a standing IT team of 5.
If you have onsite (full-time) IT, it's likely not the best option.
Is there a community for this kind of discussion at this point? When I was an admin, and then later working in networking in the 2000s, there were tons of very active mailing lists, not just for hardcore networking but for IT-oriented stuff, mostly all faded to a shadow of their former selves.
I'd be particularly interested in comparisons of Meraki/Mist/etc. for small enterprise and campus.
Some of the relevant subreddits have decent discussions from time to time. The grandfather is /r/networking, but if you look at its sidebar, there's a long list of other subreddits for more specific subjects and individual brands. Stick to the subs for professionals rather than minor home network issues and you'll find quite a few knowledgeable people and plenty of anecdotes both good and bad about different brands etc.
"Cloud-based" is the implementation; the killer feature is the single pane of glass. It's just hard to implement that without putting a bunch of logic in the cloud.
Last I worked at Meraki was 2015; I don't remember any artificial limiting of bandwidth at that time.
"Cloud-based" is the implementation; the killer feature is the single pane of glass. It's just hard to implement that without putting a bunch of logic in the cloud.
Hard in what way? As long as the control traffic has paths between all relevant devices over the management LAN, why does the cloud need to be used at all?
1. Putting the management UI on a local system requires some custom networking setup, and is full of security footguns.
2. Most customers who want this have multi-site setups; in that case, you need paths across the public internet too. Again security footguns, and also reliability ones.
3. Remote work is very very common for IT people.
4. Recovery from configuration mess-ups is harder if your control plane has to run on the same network that you've messed up.
There are on-site controllers available. They've just lost out in the market because of the amount of in-house IT expertise they require. No one wants to deal with that shit, and outsourcing the security and reliability problems to a specialized third party is usually a good idea.
This looks like an enterprise perspective. For smaller organisations operating on a single site, some of these concerns won't apply. I also think you're being a little one-sided there because cloud-hosted configuration has its own risks in terms of security and accidentally cutting off your management access, many of them directly analogous to the ones you mentioned, plus you have all the usual concerns about any critical system that depends on Internet connectivity to work properly. At the end of the day, nothing is more reliable than local wired networking, and nothing is more flexible for disaster recovery than having someone physically on-site.
In the prosumer to small business segment, I would argue that there is still enormous potential value in being able to configure all of the network gear from a single GUI, not least because it doesn't then require a lot of in-house networking expertise to get something going that works and is reasonably secure.
> I also think you're being a little one-sided there because cloud-hosted configuration has its own risks in terms of security and accidentally cutting off your management access, many of them directly analogous to the ones you mentioned,
But with a cloud-managed system you have a professional, single-purpose organization dealing with those challenges. Which you are getting for the rock-bottom price of your licensing/support plan. Building a good internal IT organization is hard and expensive, and most businesses have other things to do.
> plus you have all the usual concerns about any critical system that depends on Internet connectivity to work properly.
Generally these systems only need internet connectivity to change the configuration and for some monitoring features. In practice, customers are okay with these being unavailable during internet outages as long as both the management platform and the ISP are on a pretty strict SLA.
(Compare, for example, the usual downtime from your 1-4-person IT team not having someone with the right skills on call.)
> and nothing is more flexible for disaster recovery than having someone physically on-site.
Who has the cash for that?
> In the prosumer to small business segment, I would argue that there is still enormous potential value in being able to configure all of the network gear from a single GUI, not least because it doesn't then require a lot of in-house networking expertise to get something going that works and is reasonably secure.
That was my original point: "Generally, halfway decent wireless APs are all targeted at the enterprise market. Consumer hardware is a brutal race to the bottom, as lay consumers aren't qualified to compare options based on anything but price and UI. Ubiquiti was an outlier in trying to bring enterprise features to the consumer market"
I don't know what your standard for a 10-to-50-employee small business is, but "point your browser at this IP address" is usually beyond their in-house technical skills [1]. Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market. No one [2] cares.
[1] See for example the rise of the Managed Service Provider, which was a large and growing subsegment for Meraki back in 2015 or so. Showing up, installing the hardware, setting up the wireless, and then managing it from your office a few miles away is a big business opportunity, and is a much more efficient use of limited skilled IT labor.
[2] No one with substantial resources and a profit motive.
OK, with tongue firmly in cheek, I will try to reply to your points from the perspective of the small organisations I was talking about.
> But with a cloud-managed system you have a professional, single-purpose organization dealing with those challenges.
Just to be clear, are you thinking of the professional, single-purpose organization we've been discussing today in the context of a catastrophic data breach, the one we've been discussing in the context of incompatibilities with other vendors, lock-in effects and expensive licensing, or a different one?
> Generally these systems only need internet connectivity to change the configuration and for some monitoring features
So as long as the equipment is set up exactly how we need it and never needs to change or be checked for any reason, everything is good. It's hard to imagine why these devices need a UI at all, when the engineer who installs the equipment could just set it up once and then you're done.
> In practice, customers are okay with these being unavailable during internet outages as long as both the management platform and the ISP are on a pretty strict SLA.
John: Bob, the Internet is out again. Who do I call at the ISP?
Bob: We don't have a dedicated contact, it's just the business support number on their website.
John: I'm in the queue, at number 17. What's our maximum time for someone from the ISP to contact us about an outage? That might be faster.
Bob: No-one will call, but if it's not back by next business day we do get £50 off next month's bill.
(This is roughly how that conversation probably goes when you're a 20-person organisation with two floors of an office building on a business park outside a small town.)
> (Compare, for example, the usual downtime from your 1-4-person IT team not having someone with the right skills on call.)
What's an IT team?
> Who has the cash for that?
What cash? When we have a new starter, John or Bob sets up the WiFi on their laptop and company phone and adds those MAC addresses to the whitelist for the network. Normally John works in development and Bob works in sales, but they do know a bit about networks so this is fine. Well, as long as they can get to the GUI, anyway.
> Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market. No one [2] cares.
And yet as someone who has worked for software development businesses for an entire career and whose customers/clients have mostly been other relatively small organisations of one type or another, I have never met one that didn't. Of course that could be because I've tended to work with other technically-inclined businesses, but the same is true even for schools or my own business's accountants. I'm not claiming this is some sort of universal truth, but I don't think the market is nearly as tiny as you're suggesting, at least not in this part of the world (the UK).
Remember, we're probably not talking about setting up encrypted WAN tunnels across continents and multiple layers of switches in a data centre here. We're more likely to be talking about getting an Internet connection with suitable firewall set up, connecting a handful of switches and APs and making sure everyone knows the WiFi password, and installing everyday software on the staff PCs and mobile devices with maybe some basic configuration and enabling updates.
> [1] See for example the rise of the Managed Service Provider, which was a large and growing subsegment for Meraki back in 2015 or so. Showing up, installing the hardware, setting up the wireless, and then managing it from your office a few miles away is a big business opportunity, and is a much more efficient use of limited skilled IT labor.
They're not unheard-of here, but again, in my experience such arrangements are far less common in smaller organisations than just having a couple of people on the staff who also "set up the IT" and know enough for the kinds of everyday admin tasks you're talking about.
> What cash? When we have a new starter, John or Bob sets up the WiFi on their laptop and company phone and adds those MAC addresses to the whitelist for the network. Normally John works in development and Bob works in sales, but they do know a bit about networks so this is fine. Well, as long as they can get to the GUI, anyway.
"Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market."
You have that expertise in house. Having looked at sales numbers and market research for a company that sold internationally and cross-industry: yes, your experience is very unrepresentative.
> even for schools...
Tangent: schools are honestly pretty technically sophisticated! We sold to some of them at Meraki, but they were drawn to us more for labor savings than to compensate for limited expertise. Education customers typically had very few IT people (especially in perpetually-underfunded US primary and secondary schools), but very competent ones - feature-hungry power users.
In part that's because, even with low employee headcount, they have to provide a surprising level of IT services per student as well. A school with 80 employees and 1000 students probably has the IT workload of a white-collar employer with 500+ headcount.
> You have that expertise in house. Having looked at sales numbers and market research for a company that sold internationally and cross-industry: yes, your experience is very unrepresentative.
OK, let's assume that's true for the sake of discussion. According to your market research and sales numbers, what is the big market for these cloud-managed products among smaller organisations, and how do those organisations generally manage their IT facilities?
1. Use low-cost consumer hardware with zero centralized management, and set it up with the same expertise and judgment as your typical residential deployment.
2. Have one admin person with the wherewithal to work with web UIs, who wants a simple set-up-and-forget system. UI not much more complicated than a single-AP residential deployment, user management workflow no more complicated than adding a G-Suite user. If they can use the default password for the admin system, they will (which e.g. Meraki and Aruba don't have in any meaningful sense).
OK, so let's look at the second of those, since the first is consumer level and not really our target market for professional grade networking equipment.
Your original contention was that it's hard to implement a single pane UI without putting a bunch of logic in the cloud. If our hypothetical one admin person with some idea of what they're doing, together with any automatic assistance the relevant devices provide, can set up enough local networking that all of those devices can reliably access the Internet and support cloud-based configuration, then a similar process can set up those devices to support single pane configuration using the LAN only.
At that point, looking back to the four "hard problems" you enumerated a few comments ago, I still don't see a strong argument for needing the cloud dependency.
The risks around network setup and reliability don't seem any worse for LAN-based configuration than cloud-based. In fact, LAN-based clearly has an advantage by not relying on any external infrastructure. It also has the advantage that if you want to get more serious for a larger deployment, you can run independent cabling and create a dedicated management network for control signalling, while most places aren't going to have an independent second Internet connection for management traffic if you accidentally break your configuration so your main data network loses Internet access.
Managing multiple sites is probably a non-issue at this level of the market.
Remote access for IT/support people is easily provided if necessary by having safe and easy VPN setup as part of your user-friendly interface. This has the added advantage that your tech people can also reach any other parts of the network they need, and so you might have required this functionality anyway. And if it's locally configured, you can always quickly shut that VPN access off again in case of any security worries, without needing anyone else's remote systems to be working properly before you can secure your own in an emergency.
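To make that concrete, here's a minimal sketch of the kind of VPN I mean, using WireGuard on an OpenWRT-style router (the interface name, addresses and keys are placeholders, not a definitive recipe):

  # /etc/config/network -- management VPN endpoint on the router
  config interface 'wg0'
      option proto 'wireguard'
      option private_key '<router-private-key>'
      option listen_port '51820'
      list addresses '10.99.0.1/24'

  # one peer stanza per admin device
  config wireguard_wg0
      option description 'admin-laptop'
      option public_key '<laptop-public-key>'
      list allowed_ips '10.99.0.2/32'

Revoking access in an emergency is then just deleting the peer stanza (or downing wg0), with no third-party systems in the loop.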
In actual deployments and support situations I saw at Meraki, connectivity from individual hosts to the internet was usually the most reliable part of the network.
At this point, it feels like the reasons to use or not use Cisco for networking are much the same as the reasons to use or not use Oracle for databases. I'm not sure it has much to do with the technology in either case any more.
> Note, however, that roaming between APs is a feature of the 802.11 standard;
In theory yes, but man do a lot of devices have terrible roaming heuristics.
"I can still see beacons so id better stay here even though i havent received a packet in the last minute. Wouldnt want to pay the time cost of associating with that other BSS that has 5X the signal"
The key issue is the protocol seems to have no ability to associate with multiple BSSs at once.
It's so nearly there. The power management stuff means that even with a single physical radio one can associate with multiple BSSs on different frequencies by telling one BSS to hold packets for you while tuning in to the other frequency.
All that's needed to make it reality is a way to tell a BSS "If I fail to ACK a link-layer packet, please forward it via the wired network to this other BSS to send to me instead".
Then a client could be connected to multiple BSSs, send packets via either, receive packets via whichever one it is currently tuned into, and not lose any packets while switching.
You can fix this on the AP side with minimum RSSI or data rate control. But that would probably push you over to either Ubiquiti (and the similar “cloud based” options) or the enterprise market to get those features, unfortunately.
Have you tried setting your transmit power low (just enough to get good signal to the places intended, but definitely no more than your devices can transmit) and increasing the minimum send rate to something reasonable (say 10-40 Mbps; beacons use the minimum rate)?
It should help with the high-power-but-bad-signal case (some devices use fixed thresholds) and equalize beacon vs. data reception quality.
I don't think OpenWRT has data rate config in the web UI, but it does support the setting in the config files (which I normally scp onto a device). The following seems to work:
/etc/config/wireless:

  config wifi-device 'radio0'
      ...
      option txpower '1'    # 1 mW (more than enough for 1 room)
      option legacy_rates '0'
      list basic_rate '24000 36000 48000 54000'
      list supported_rates '24000 36000 48000 54000'
This messes with your AP placement, though. Depending on placement, you may or may not end up with dead spots, so you need to be sure your AP coverage is sufficient when taking this strategy. And yes, I take this strategy too.
I went through this when setting up WLAN in a new office some years ago and looked at roaming APs etc. In the end I just bought 4 consumer Asus routers on the same SSID; it worked fine for all our purposes at least.
Do people _really_ need wifi roaming in their homes?
I have multiple cheap APs setup in my house using the same SSID and it's fine. As long as I'm not holding a realtime conversation and moving around between APs I never have any problems. And since I almost never hold a Skype call while walking through my house I almost never have any issues.
You don't have stone walls. And you haven't spent the last year working in a study that's located between two APs, where clients flip now and again and Zoom would tear down the connection.
Of course you could say: Does the house have to be designed that way? Do the APs have to be located where they are, is it really necessary to have that stone wall, is it necessary to put the study in the place where it is, is it necessary to have that noise insulation around the elevator? None of that is necessary, but some Mikrotik hardware was much cheaper than getting rid of a stone wall and more pleasant than having to hear it when the neighbours use the elevators.
Yeah, I have a 2' thick stone wall in the centre of my house (old exterior wall). I have an AP on either side of it as they penetrate the ceilings/floors above fine, but nothing is getting through that wall and maintaining good signal.
Just brick that's old enough will do it. Mine's something like 150 years old, and it's absolute murder to drill into, just incredibly hard, and it's either dense enough to act like stone, or it's absorbed enough moisture over the years to look like a Faraday cage to wifi.
Yep, stupid L-shaped house where the inner curve is a damn Faraday Cage. NOTHING goes through.
If I'm in the living room and need to move to the other end of the house to get away from family-related noise, the device needs to roam between two APs.
Even if you don't "need" roaming having more coverage lets you dial down the power on all of your APs, so you can get much closer to the theoretical maximum throughput.
4 floors, 150-year-old brick, random steel girders in annoying places, and a broadband line that comes into the building at almost the least convenient place possible. Yeah, I need roaming.
> Note, however, that roaming between APs is a feature of the 802.11 standard; you just need to have all your APs on the same layer 2 (802.x) network, and using the same SSID and credentials. No fancy hardware required, and you can even mix and match vendors.
Not exactly. There are extensions to pre-authenticate with an AP (802.11r) for truly seamless roaming without packet drop or delay, and for AP-controlled roaming (802.11k) where the current AP tells you your options to roam to. The latter is important because the AP generally has better information about the network than the client, and because clients are not that great at managing this.
I am sure there are other extensions too, but afaik cheap APs don't implement these.
The base standard's behavior requires a reassociation to the new cell (i.e. AP, i.e. BSSID). This introduces a gap in coverage, but for simple setups like the 5-AP one IgorPortola is talking about - I assumed that this was using shared-password auth - the gap's length is functionally 0. 802.11r gets rid of that gap, which is important when using heavier-weight authentication protocols like 802.1x.
(Note that by 802.x in my original I meant not 802.1x, but rather the set of standards including 802.3 (ethernet) and 802.11 (wifi))
Ubiquiti had a secured backend - their screw-up was not doing MFA on their admin accounts. I would still like if there was an option for a local-only control panel.
If admin login is using weak credentials, it is by definition not a secure backend. Password/credential management and mandatory MFA are ALWAYS part of security due diligence for suppliers.
There are ways to limit the scope of those. One set of credentials per environment, for example. You can also limit the use of these credentials by policy.
The cloud controller is a (surprisingly heavyweight) service that manages a network of UniFi devices. It can run on a Raspberry Pi, or in an x86 container / VM.
If I wanted to run it all the time, I’d try putting it in a docker container on my synology.
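Something like the following, using one of the community-maintained images (the image name, ports and paths here are from memory, so treat them as assumptions and verify first):

  # 8443 = web UI, 8080 = device inform endpoint, 3478/udp = STUN
  docker run -d --name=unifi \
    -p 8443:8443 -p 8080:8080 -p 3478:3478/udp \
    -v /volume1/docker/unifi:/unifi \
    --restart unless-stopped \
    jacobalberty/unifi:latest

You'd then point each device's inform address at the Synology's IP so they find the controller in its new home.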
Instead, I have an SD card for my Raspberry Pi that has nothing but the controller installed. The main downsides to this are that it is easy to lose the SD card, and that the controller gathers bandwidth/usage/wifi connection reliability stats, but only when it is running. I don’t get those unless I boot up the RPi to diagnose some network issue (this has never been an issue in practice).
One advantage of the RPi setup over a Synology container is that it has both an Ethernet jack and a wifi adaptor. This is surprisingly helpful when bootstrapping complicated mesh topologies.
I have a UDM Pro which self-hosts a controller, though personally, if I'd known it couldn't be joined to another controller, I'd have gotten something else so I could throw it in Docker (which runs on a NUC with the storage off a Synology).
Gosh, I wish I knew. This thread is rife with alternatives, so anyone's guess is as good as mine. The UniFi APs I have running are still good and work extremely well. So my suggestion is to keep using them, but only if you host the controller software on your own hardware (I'm using an RPi 4 as stated) and only if you avoid their cloud solution(s). (IMO.)
I am still looking for alternatives for when the time comes to replace mine - which I'll be forced to do once/if they completely nerf the self-hosted-on-your-own-hardware options.
The Ubiquiti controller is not needed for general operation, unless you're using a guest hotspot. Otherwise, if it's offline you just lose the ability to do configuration and its data/stats logging.
Hah, that's a dream world where enabling/disabling SSIDs ever worked properly.
They have a good UI and good hardware, but the software seems half-baked.
Originally, with the switch to the "new settings", the schedules were switched between the APs and the UDM; not sure about a dedicated cloud controller.
Still lots of pitfalls with just MFA. Text/email is the worst and TOTP is somewhat better but not great. A lot of password vaults support storing the TOTP secret so they can generate time-based codes, which seems reasonable when the vault is 2-3 factor protected (some do IP heuristics, passwords, tokens, PINs, etc). Unfortunately, if someone gets access to the vault in its unencrypted state you're in for a world of hurt.
Even with hardware tokens, if someone gets access to your machine while you're using it, they can wait until you authenticate and then use the creds, proxying requests through your machine so they look legit.
I run a local controller with no remote access for UniFi - I would never use any networking hardware that needed a cloud controller/connection, for breaches exactly like this.
Wow, this is great and seems like a direct competitor to UniFi. A few years back when I was researching Meraki I found it way too pricey for small business compared to UniFi, but this makes much more sense now.
With standard 802.11 roaming, you have to reassociate and reauthenticate to the new AP. While this process is underway, you can't pass any traffic. For open networks or simple auth schemes like WPA2 single-password, this isn't very noticeable; however, for heavier-weight auth schemes like 802.1x this pause is substantial and is especially noticeable on voice/video calls. 802.11r is a scheme for caching the authentication info, letting you avoid the 802.1x round-trip to a central auth server.
For a 5-AP network, usually with shared-password WPA2, it's not necessary.
Yes, roaming by sharing SSID and passcode is a world of pain. 802.11r solves all those pains, I've been using it on OpenWRT for months without a glitch.
Yes, it's why I use 802.11r. It works with most devices, although the one which does not support it makes me laugh. Nintendo Switch will not switch from one AP to another. It holds on, tooth and nail, to whichever BSSID it used when it first connected.
My kids have to go into settings, reconnect, and move on.
I have a couple of AP AC Lites running openwrt and 802.11r, works fine except on Xiaomi phones apparently...
I never tried the UniFi software though; I flashed OpenWRT within 15 minutes of receiving the APs.
Pretty much that. It's also very simple nowadays: you just tick the box on the Wireless Security tab, and check that the mobility domain matches between all the APs - it should by default; I think it's derived from the SSID.
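For anyone doing it by hand in the config files instead, it's roughly these lines per AP (a sketch for a WPA2-PSK network; the mobility_domain value is an arbitrary 16-bit hex ID that just has to match across all the APs):

  # in each AP's wifi-iface section of /etc/config/wireless
  option ieee80211r '1'
  option mobility_domain '4f57'      # same value on every AP
  option ft_psk_generate_local '1'   # derive FT keys locally; skips the r0kh/r1kh key lists for PSK setups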
Be aware that there might be compatibility issues. I enabled it on a pair of OpenWRT-running APs, and the handoff worked fine for my laptop, but my phone would claim to be successfully associated/authenticated with the new AP, but traffic wouldn't flow. Turning off 802.11r fixed the issue completely, and it turns out I don't really need it after all, as my devices seem to roam properly and the reauth is pretty quick.
We use Meraki MR/MX stuff at our office and are generally happy with the value & service. The MS stuff though, that's another story. Do you guys have plans to enter the sub-$2K tier with L3 devices?
I haven't worked at Meraki since 2015; sorry, can't help you out on that one.
I will note that as of 2015, "L3 switching" (i.e. hardware-accelerated IP routing) hardware was expensive as hell. I believe that on the software side, dropping new hardware into the existing hardware-routing infrastructure is fairly easy, but I don't actually know because I didn't work much on MS hardware.
So the question then becomes: is there just not a good enthusiast market for this stuff? I have met a number of people who are "network nerds", so I'm inclined to think the market does exist. With any of the plethora of consumer devices (Linksys, Netgear, D-Link) it's a dice roll whether your gear is complete garbage or not. A lot of the time, you're coming up snake eyes.
I've got some Ubiquiti gear I bought a couple years ago. Like you, I want good quality gear that I can manage myself. I don't need a bunch of fancy corporate garbage, like link aggregation or cloud management. Give me solid, hardware accelerated routing and switching, flexibility over my local DNS, and maybe some VLANing.
I was running Linux on a small x86 box as my last network router. Maybe it's time to get back to that. That or go back to banging rocks together. Haven't decided which, yet.
> So the question then becomes: is there just not a good enthusiast market for this stuff? I have met a number of people who are "network nerds", so I'm inclined to think the market does exist.
My experience as a professional "network nerd" is that most other people in the networking field run cheap/second-hand enterprise gear fetched from their employer at a major discount, and simply seem to care less about wifi in general.
A lot of that changed with my peer group either due to caring about managing from a phone or caring about power/noise. The latter are especially not things real enterprise gear tends to optimize for.
The wireless is something for guests, and is hacked together with something you know works with an open router OS, or something off-the-shelf on an isolated VLAN.
That kinda thing yeah, at least myself and other engineers I’ve compared notes with.
I picked up a pair of Aruba 3200 controllers and a bucket full of APs on a local auction site for a song years back, still does me fine. Then again, not caring about the fastest latest standards is key, if you’re chasing current gen the enterprise stuff is unaffordable. You do need the appetite for a bigger power bill, mind.
I can't imagine that there isn't a market for this. Look at the number of people recommending Ubiquiti stuff to each other. There are entire YouTube channels dedicated to it. If your whole living space or small office can be covered with a single access point, get a 3-in-1 combo that has a WAP, a router, and a small switch. But if you don't, you are left with, what exactly? There is also some demand for mesh stuff, for people who rent and don't want to run Ethernet cable.
My plan: OPNsense on a PC Engines board for router + firewall, an unmanaged PoE-providing switch for switching, and something from 2-8 WAPs for indoor/outdoor Wi-Fi.
There were/are some performance implications of pfSense/OPNsense on these boards specifically. It seems like this has improved significantly in FreeBSD 12+.
> APU2, APU3 and APU4 motherboards have four 1Ghz CPU cores, pfSense by default uses only 1 core per connection. This limitation still exists, however, a single-core performance has considerably improved.
I can saturate 1 Gbit/s with no problem out of the box with Debian/OpenWRT on APU2/3/4, YMMV.
I had a PC Engines board for a while and I really liked it, but make sure the one you order can support your internet bandwidth. When I upgraded to 1-gig internet, I was pulling around 450 Mbps on my PC Engines apu1d4. I ended up getting a Ubiquiti UniFi Security Gateway and then I was able to pull the full 1 gig.
It's pretty hard to recommend Unifi based on how they handled this breach, but the hardware itself has performed very well. Hopefully the new PC Engines boards can accommodate your needs.
You can connect the Google mesh routers together with Ethernet. I’d guess other competing products will do the same. It’s cheaper and much simpler than a full Ubiquiti setup for a few access points.
It's got a quad-core i5. I run Proxmox and virtualize VyOS as a router, Home assistant, and a couple of other small things like an https reverse proxy for various services that I like to access remotely.
Went this route after my old OpenWRT router couldn't keep up with gigabit WAN. This box has no problems doing so, and even does WireGuard at near wire speed.
There are a bunch of similar units available on Aliexpress, as well as 1U units with x86 CPUs and SFP ports for 10GbE, etc.
They’re small passively cooled embedded x86 machines. They haven’t made the jump to 10GBit, and their newest model (the apu2) is getting pretty old. However, they have very long production timeframes (many years) for each board config, which leads to stability over time.
As you said, it's an embedded solution, and its CPU power is borderline for GigE speeds if you want more than the bare minimum (fw/nat), like QoS, DPI or some virtualized services.
I have an ER4 which works for now, but I plan to go down the custom route once the ER4 is unable to push packets quickly enough. My hope is that VyOS/DANOS is sufficiently stable by then to run as a VM on, say, an Odroid H2+ replacement (or something similar).
Does this type of setup support a mesh network with multiple APs and SSIDs, VLANs, etc? I have never seen a PC based all-in-one interface that supports all of these things the way Unifi does...
> So the question then becomes: is there just not a good enthusiast market for this stuff?
No. They just don't want to serve the low end. I'm from SK, Canada and the vast majority of all businesses are small businesses. This site [1] says 98%. The problem is they only account for about 25% of the GDP, so vendors don't consider them worth serving. Everyone wants to sell to the 2% of the businesses that make up 75% of the GDP.
There's a lot of money to be made in the small business sector. It's just not *enough* money for huge tech companies.
I've thought for a while that the neglect of consumer, prosumer, and small business computing is a side effect of concentration of wealth. A small percentage of businesses have all the money.
I do casual work for a person that serves that sector. It’s 100% self serve for us. We’ll pay fair value for stuff and vendors won’t ever need to interact with us. The problem is when those vendors think their firmware updater is worth a $10 / month subscription. It’s not.
For example with pfSense going closed source we’d be willing to pay around $100 total lifetime cost to put it on PCEngines hardware. We can build that in to the upfront cost of the device. I wouldn’t be shocked if they try for $50-$100 / year which won’t be economically viable for our market, so instead of getting $100 / device and never interacting with us, we’ll end up moving to a different product. I really hope they come up with an offering that’s appealing to the small business sector, but I’m not holding my breath and I’ll be learning opnsense as a contingency.
As a former enthusiast in this area, I need the time for other more pressing interests and have reverted my home network to Eeros pinned to an IQrouter. All of them require some central service to operate, and I rarely if ever have to pay any attention to them. They also provide better coverage and less radio interference than the prior gold standard, Apple Airport devices. The IQ runs some sort of ssh *nix variant and the only time I’ve ever had to call Eero support was to turn off 5GHz for a minute^ to pair a smarthome device.
Still, it’s nice to have a hobby, and if you’re looking for one, run your own, sure! No shame in that. But it’s no longer necessary, and that’s pretty swell to me.
^ I agree with why they don’t make that accessible to end users: because people will uselessly fiddle with settings knobs to feel empowered, knobs like “separate 2.4 and 5 networks” (which breaks roaming and makes users incorrectly blame their WiFi routers when PEBCAK is at fault) that semi-expert users feel qualified to mess with, and lazy technicians will use to create “guest” networks that don’t offer protection and perform miserably due to being locked to 5GHz.
Maybe you and I have different opinions of "enthusiast" in this context. There is really only so much you're going to do on a home network. You set it up and once it's going, it requires very little maintenance. I would not consider running my own network gear a "hobby" any more than I would consider restaining my deck a "hobby". It's largely a one-time project.
I do have requirements beyond what the typical consumer does of their network, like PoE to run a couple of access points, PPPoE so that I can put my modem in bridge mode, the desire to configure extra DNS records, dynamic DNS since my home IP changes. Oh, and let's not forget some filtering/rewriting capabilities so that I can force modern smart TVs to respect the DNS server I provide them.
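(The smart-TV part, for the curious, is just a firewall redirect that catches any outbound port-53 traffic and hands it to the router's own resolver. A sketch in OpenWRT terms, assuming the default 'lan' zone; clients that hard-code DNS-over-HTTPS are a separate battle:)

  # /etc/config/firewall -- DNAT all LAN DNS traffic to the router itself
  config redirect
      option name 'Force-DNS'
      option src 'lan'
      option src_dport '53'
      option proto 'tcp udp'
      option target 'DNAT'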
My network is much more usable having put the time into it. Yes, you could buy some off the shelf thing and get an OK experience, but that wasn't good enough for me.
I used to do all of those things on homebuilt FreeBSD routers for a commercial ISP we built and ran for a few years back in the day, and now I do them on my off-the-shelf router so that I don’t have to maintain the OS or link-shaping, I just click Update Now once in a while and it autoadapts to local congestion.
All of these features are available out of the box and have a GUI intelligent enough to offer a text area for adding filtering/rewriting commands that exceed the GUI’s remit. I used to have to hand-build this. Now I can plug and play it, and end up with the same experience as someone who built their own server and OS, using the same open source components as they would.
Total time invested, 8 hours over 5 years. I’m content with that exchange, and it has come with the only drawback being “it cost money to purchase the router itself”. I could DIY for less expensive in dollars and more expensive in hours. That’s the hobby-or-not choice, as I see it.
I do not decry those who invest time instead. Good, do so! I invested thousands of hours of my life into DIY of this stuff. It was invaluable experience, but it’s no longer mandatory to DIY to get a great experience indistinguishable from DIY.
I'm guessing that they're just not interested in making infrastructure products anymore, only the client devices. Airport is discontinued, all backend/server devices are discontinued.
They do sell mesh wifi products from Eero, Linksys and Netgear on their shop, but I don't think there's going to be any Apple-branded network gear anytime soon.
Check the Openwrt table of hardware[0] for a well supported device, and you're good to go. Seriously, there is no good vendor software in this space, but the consumer hardware can actually work fine with better firmware.
Generic Linux or BSD boxes are ok as routers, but they're not the best switches since they start taking up a lot of space if you need a bunch of NICs.
OpenWRT. Been using that in my home net for the past 12 years or so, on multiple generations of various hardware.
The latest incarnation on a Linksys EA8500 is slightly bumpy (seems like a kernel crash), but it hasn't gotten annoying enough yet to hook up the serial console and get into kernel bug hunting.
I have about a dozen VLANs that are distributed between different SSIDs and a few L2 switches for wired, plus Bonjour gateway/filtering for stuff like AirPrint.
I've seen someone have a fair bit of success with Grandstream APs. The controller runs on an AP itself or on their router, if memory serves me right. I believe they are also moving into the switch market later this year.
Me too, but it's not really an alternative - the original Tomato isn't even updated any more, and it's only configurable in its web UI, so it's really only for home use.
Garbage was a bit of an indulgent word. It certainly is relevant and useful technology. It just isn't useful for home users, at least none that I've ever met.
It is as useful at home as it is anywhere else. Failures just cost less at home.
All my switches are bonded to one another, and it was handy when something snapped one of the fiber runs. That side of the house kept connectivity until the weekend when I could crawl around and run a new cable. (Never did figure out why it broke, though. Guessing the house shifted in just the right way.)
It would have hardly been the end of the world if I had to wait, but if your kit can do it, why would you not?
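(For anyone curious what that looks like without a GUI: on Linux-based gear, or for a server's uplinks, the equivalent is an LACP bond. A minimal iproute2 sketch, with eth0/eth1 standing in for whichever two ports you're aggregating, and the switch side also configured for LACP:)

  # create an 802.3ad (LACP) bond from two ports
  ip link add bond0 type bond mode 802.3ad
  ip link set eth0 down && ip link set eth0 master bond0
  ip link set eth1 down && ip link set eth1 master bond0
  ip link set bond0 up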
I mean, sure. If you have the capability and the inclination, go for it. I live in a house that is quite large and I can't come close to fully populating a 24 port switch in a useful way.
I would not detract from your network going the extra mile. I suspect that for most people, the value-to-effort ratio of link aggregation just isn't there in a residential setting.
Look into Mikrotik hardware and OpenWRT. Of the Mikrotik-based hardware I'm familiar with, they support PoE. OpenWRT supports roaming and mesh networks, and is a local solution, as opposed to a cloud-based one. There are no licenses you need to pay for, either.
Mikrotik is amazing for what you get. Bit of a learning curve, but worth the effort; I've seen large-scale wireless networks crossing mountains with their kit.
I set up a small WISP using Mikrotik kit for a few neighbours. It worked well in the end, but the learning curve was immense unless you have a strong networking background. I'd set up and used OpenWRT before for a domestic router, and this was another level of complexity to get basically functional compared to that. That said, the level of customizability and scripting (albeit in a weird language) you can do is immense, so for a true power user with a lot of time on their hands, it's a good option.
IMO using what we have intelligently is easier. Ubiquiti has the Edge line of routers and switches that are not cloud-controlled, do not listen on any ports, and do not establish any connections on your behalf.
The only routers vulnerable to that exploit were routers that were deliberately configured to be open to the internet, no router with the shipped default config was vulnerable. The vulnerability was patched out in a bugfix release months before the exploit happened, so additionally it was un-updated routers at risk.
That's something entirely different from what happened with Ubiquiti.
True, I bought it because of the 10Gb Ethernet and youtubers recommending it. I didn't realize it was also a router with a 45-dollar license key.
https://mikrotik.com/software
Many people switch not simply for the security/security-theatre, but because they no longer want to support a company with such a poor security strategy after it is revealed that they have internal issues.
They all do, though. And if they don't, they're all at risk too. The best you can do is make decisions that reduce dependence on them for when they fuck up. That's why I went with the EdgeRouter line to begin with. I've already planned for this situation.
Like actual Cisco-brand ones, or Cisco-compatible ones?
I checked my order history; it looks like ipolex and 10Gtek 1000Base-T copper modules have had troubles in my Mikrotik switches. The Mikrotik-brand ones work fine, and every 10G fiber module I've tried has worked (lots of fs.com, and I think 10Gtek, and probably some other brand off Amazon).
No, TP-Link's Omada controller can be run locally, I do that at home and at my parents' house. It is not cloud-connected unless you turn that on. Runs surprisingly well on a Raspberry Pi 2, actually.
I've got a setup similar to what you're asking for. The TP-Link APs (AC1750, AC1350 and AC1200) support PoE, they're in a wireless mesh, support roaming, and all configuration is handled with one interface, no cloud involved.
Just make sure that what you're ordering says it supports Omada. They still ship a lot of SMB gear that doesn't, but all the basics are there now.
Only been using it for a few months but it's been good. I moved the config I mentioned above (the three APs) to my parents' house and they haven't had any problems. Throughput in their case is a little limited but that's expected with the installation (no ethernet and a lotta walls). Hasn't needed a reboot or anything.
I just started using an EAP660 HD[1] at home a week ago, so far so good. Haven't topped out the speeds yet because nothing in my house can take advantage, but I have some AX200 cards coming. I understand there's a throughput bug at the moment that's going to be solved in a future firmware fix[0], but my clients don't go fast enough to hit that yet. TP-Link seems to very actively update their firmware for the pieces I've been using, FWIW.
So I've been pretty happy with it so far. Roaming has been fine, though in one case I think I had non-optimally located a couple of APs because my Linux laptop kept rapid-fire flapping between two of them. I believe that's a client-side problem, though.
I did try a Cisco 240AC and its wifi performance was rock solid. The management interface is non-cloud, and I believe covers the whole network, but it lives inside the AP itself, which I don't love. The management UI is buggy and they seem slow to push bugfixes, and when I added a 142ACM to extend my network it started going flaky -- I had to do a factory reset/reconfigure of the 240AC to resolve it, then it happened again a few weeks later -- so I'm gonna flip my Cisco stuff on eBay. :-(
[1] Tip if you adopt one of these in Omada: You need to give Omada the EAP660's password (default "admin"/"admin") for it to successfully adopt. The other APs never required a password to adopt, so it was a little confusing until the internet came to the rescue.
I bought 3 EAP330s and TP-Link deprecated them after a year or so: no more firmware upgrades for their (then) top "enterprise" access points. Rumour says they weren't happy with the chipset, so they decided to abandon it altogether (just this model; cheaper ones were on different chipsets and support was available for longer). Last time I checked there was no OpenWRT support of any kind. They did hang when I had port aggregation enabled, and seemed to run rather hot. But feature-wise, and for non-trunked networking, they were fine and supported what I was looking for. No cloud; I didn't even use the controller, you can just manage them "the old school" way. But don't count on years of support.
For what it's worth, we've been running about 15 TP-Link EAP225s in a warehouse without any hiccups so far. Most importantly, they don't randomly die or lose the controller pairing like some low-end Ubiquiti units did in the past. The only quirk is that on Windows Server you have to configure the service manually, but it's no big deal. [0]
I also have a TP-Link Omada setup. For layer-2 networking with switches and APs it's fine. Cost-effective, reasonably stable, acceptable performance, and the features that are regularly used are all there.
The layer-3 stuff, however, is still early days and I can't recommend getting the secure gateway at this time. No IPv6 support. It depends strictly on an internet uplink configuration for the default route, to which all traffic is then NATted; you can't change that. No real security features, no packet inspection, etc. The routing features really feel like an alpha version. They are working on it and have a roadmap to a more workable layer-3 solution, so maybe in the future it will be as nice as the Ubiquiti solution.
Cloud is not needed, but possible. You can get an OC200 controller for not much money that fills the role of a single-pane configuration web interface. The software for that controller can also be downloaded for Linux on PC or ARM if you want to use your own hardware. Also, the network keeps running if the controller is down.
If you log in to the OC200, it's under Settings > Cloud Access. It should be off by default. Or you can log in to the cloud interface and forget the OC200 under Actions.
I run a similar setup with a bunch of EAP-225 APs controlled by a local instance of their Omada software (running on x64 rather than on ARM).
I've been very happy with roaming/throughput/reliability generally. The EAP-225 is 2x2, which they don't readily announce; their newer and more expensive units are available as 4x4. That being said, they're so cheap that I've been happy just to throw more onto the network.
For the software to manage them, it uses some kind of multicast identification scheme to find new APs. If you're on a different subnet then it won't be able to automatically see them. They have a tool to connect to the AP and give it the management server IP, but it's Windows-only.
The other option (that I went for) is just to create a management VLAN (good practice anyway) that the controller and APs live on. This is specifically supported by the APs.
Great without it. The major improvement I noticed with it is 802.11k & v (faster handoff).
Without those, it takes a little longer for the device to switch APs at the borders of their coverage. Mostly imperceptible, but the longer handoff times can be enough to kill a phone call over iPhone WiFi Calling.
As a US citizen, I would love for there to be a reasonably-priced US-made alternative. I guess Netgear could be one[0], but their Insight management system is cloud-only, isn't it? Happy to be corrected.
I think I'd rather take an ostensibly-offline controller from China than a cloud-enabled one from the US, though I'm not really happy with those options. :-(
Are there some good options I missed? Would like to hear about them, if there are any.
[0] I expect their hardware is made in China, even if their controller may not be.
It's a sad commentary on how far the bar has been lowered. "No, your system isn't secure, but the people that can access it can't really do you bodily harm" is not really the level I would hope we are trying to achieve.
I'm not sure what you're calling conspiracy theories since it looks like the GP edited his content, but if you think China is not exfiltrating data from hardware, let me know. I'll provide you with copious references from the recent past. Sure, the US is doing it, too.
I certainly think they do for businesses, but worrying about state actors attacking your home network is kind of pretentious until they actually do it. Are you that special?
The comment was something about how if you get the FBI mad they'll fabricate a drug case against you which somehow involves hacking into your home router or possibly subpoenaing your ISP.
If your favorite color of hat happens to be black, then sure, why wouldn't state actors be looking for you? If you've done some stuff that involved using credit cards that didn't belong to you, or any other of a myriad of things on the FBI's list of things you should not do, then they will be looking for you.
And the NSA was known to be intercepting router shipments to international customers, injecting their backdoors, then re-shipping the modified hardware:
I have a Turris Omnia for my main router. It's a solid piece of kit.
The OS, TurrisOS, is based on OpenWRT and for a while they were having trouble keeping up-to-date but that's been sorted in recent releases.
There are great features like auto-updates and BTRFS snapshots and the ability to rollback to previous known good if you screw up a config. I also run LXC containers on it for things like PiHole (not on the internal flash but the main board takes an M.2 SSD).
The Turris MOX is a modular Turris system that you can assemble from the parts that you need.
I have a small Gl.iNet router upstairs flashed with upstream OpenWRT that I use as a WiFi access point and have setup 802.11r for BSSID roaming. Have been using this setup for months and handoff has been completely transparent.
These guys burned me so hard. Something on my Omnia burned out. I offered to pay to have it shipped and fixed and shipped back. They stopped emailing me back. It was a horrible, horrible support experience.
It's a shame that Mikrotik doesn't have an easy-to-use global GUI.
It's the right hardware, with great firmware and wonderful flexibility - but it needs an easy-to-use GUI controller that makes the simple stuff easy if it's to take over from Ubiquiti.
These recent posts about Ubiquiti have made me look again at MikroTik. Their hardware is more affordable than I had remembered. Is there any good intro to their hardware - there are certainly a lot more options than you get with Ubiquiti.
Even before now there are some limitations with UniFi that have annoyed me. Setting up more complex DNS and firewall rules requires editing the JSON config. IPv6 tunnelling isn’t well supported. The stats in the controller, whilst neat, aren’t very useful because they have to be manually reset to zero.
The benefit of the GUI is that it documents what has been changed: in the GUI there is a list of port forwards.
With the CLI you either need to document it yourself, or you need to know to query if there are any port forwards. That can be a problem if there is more than one person responsible for the network, or if someone else needs to inherit your setup.
Documentation of configuration sometimes isn’t an issue on your own home system because you generally have a high level memory of what changes you made and their purpose. Conversely I still struggle sometimes with Ubuntu because I customise my configuration using command line tools, and I find keeping track of those changes or the implications of those changes is difficult.
Yup, very nice router/switch. If anyone could forward a properly documented configuration to make the Apple AirPort guest network work I'd be ever grateful.
The best intro really is to buy some of their hardware and play around with it. Their routers and APs are all based on the same basic RouterBOARD hardware and run the same RouterOS. The specs for each device are pretty well laid out on their site, but you do have to read through a few product pages to find exactly what you're looking for.
I would start with a hAP ac², a wireless router that is approximately the equivalent of their hEX Ethernet router plus a dual-band AP (cAP/wAP ac). It's a great standalone device and less than $70, or you could get the individual devices for a bit more flexibility.
Avoid the models labeled "lite", those are low-cost versions with lower routing speeds and 2.4GHz WLAN only.
For management you can obviously configure each device separately, or you can use CAPsMAN where one device acts as the controller and handles all configuration. It's not as slick as Ubiquiti, but it works.
I use the edgerouter line for firewalls, and unifi (running on a local "cloud key", with cloud login turned off) for only access-points and some switches.
This news (covering up, legal overriding good security practices) is super concerning though, and I'm definitely going to start looking around as well.
Yea. I only have an edgerouter 4 as far as Ubiquiti equipment goes. It works great for its intended purpose (I needed a dual WAN router and consumer level gear generally doesn't do that). I was eyeing their WAPs, but I believe I'll pass on them now.
Global UI? You mean, an AWS-hosted configurator for your network? We just had an example of it being a security risk. God save Mikrotik from implementing something similar.
That's basically what MikroTik CAPsMAN is, depending on your needs.
I think it's specific to Access Points, so not a general purpose centralized controller for MikroTik equipment, but... centralizing access point management seems to be the main thing under discussion here.
CAPsMAN is a royal PITA to set up. You have to manually add all the WiFi channels, map each AP to the channels it'll use, and do a lot of other busywork. Once it's set up, though, it works fine, and lets you upgrade all devices from the manager, etc.
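To give a flavour of that busywork, a minimal CAPsMAN setup might look roughly like this (a sketch; the channel, SSID, and passphrase are illustrative placeholders):

    # on the controller
    /caps-man manager set enabled=yes
    /caps-man channel add name=ch36 band=5ghz-a/n/ac frequency=5180
    /caps-man security add name=wpa2 authentication-types=wpa2-psk passphrase="changeme123"
    /caps-man configuration add name=home ssid=HomeNet channel=ch36 security=wpa2
    /caps-man provisioning add action=create-dynamic-enabled master-configuration=home

    # on each AP, hand the radios over to the controller
    /interface wireless cap set enabled=yes interfaces=wlan1,wlan2 discovery-interfaces=bridge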
Nothing stopping you from using a local Ubiquiti controller though. You aren't tied to their servers if you don't want to use them. That said, they seem pretty problematic from a security standpoint based on these leaks, and your networking infra should be rock solid.
Winbox is a really nice remote controller for MikroTik, and the vulnerabilities of a shared global controller have just been clearly demonstrated, so I don't see an issue.
Not really. The vulnerabilities of using a vendor-hosted cloud controller have been demonstrated, but having one yourself, next to your networking devices, is just as secure as it always was.
That's how I run it, but it seems they are now pushing ads to local controllers and between this and deprecating recently released devices, I just completely lost trust in them.
Small correction - if you don't have a product that would display stats in a portion of the "single pane of glass" control panel, it displays an ad instead of a "you don't have this product, no data to see here".
Scummy? Sure ... especially if you don't have a Ubiquiti gateway but only APs, so the top part of the page is blocked out. But it's not exactly "pushing ads at me!" in the traditional sense - e.g. they're not targeting ads, they're not collecting data.
Protect still needs cloud to be activated for authentication it seems.
I used to have remote access turned off and accessed the video streams via the iOS app when my phone was on VPN to the local network. That no longer works. Remote access (cloud) needs to be activated in order for the iOS app to work, no matter if you are on the local network or not.
I've run my own controller locally for years without forced cloud login... I've never used the iOS app; what can you do from it that you can't do from the web interface?
He said Protect, which is only part of the newer Gen2 cloudkeys (controller + video surveillance). The app just lets you manage the basic config of your devices and see network stats. There is a separate app for viewing your security cameras via Unifi cloud.
He said Protect, which only comes on the new cloud key gen2 devices and requires a Unifi cloud account. The old stand-alone controller (key or installer) does not unless you tie it to your Unifi cloud account.
I have a Unifi Dream Machine Pro with cloud access turned off-- the setting for it (since the UDM Pro makes all applications accessible via the cloud, not just Unifi Network) is in the device settings rather than the Unifi Network controller settings.
When they introduced call-homes/telemetry somewhere in the 5.x code, I blocked their known DNS entries and then set up firewall rules to block all internet access outside of the Ubuntu repos.
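Something along these lines, roughly (a sketch; the blocked hostname is an illustrative placeholder, not a verified Ubiquiti endpoint, and hostname-based iptables rules only resolve at insert time):

    # /etc/dnsmasq.d/block-telemetry.conf -- NXDOMAIN the phone-home hosts
    address=/telemetry.example-vendor.com/

    # default-deny outbound on the controller host, allowing only the repos
    iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -d 192.168.1.1 -j ACCEPT   # local DNS only
    iptables -A OUTPUT -d archive.ubuntu.com -p tcp --dport 80 -j ACCEPT
    iptables -A OUTPUT -d security.ubuntu.com -p tcp --dport 80 -j ACCEPT
    iptables -P OUTPUT DROP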
As far as I know, TP-Link doesn't require any cloud based service, or even a local controller. They can work fine without any of it and you just manage them locally/directly.
I've never had good luck with TP-Link hardware though. Constant crashes/disconnections once you get past a few devices on the network, mysterious failures, hardware quickly getting dumped into the unsupported list, and so on. I've sworn off of them entirely.
Yep, this is what I do. I used the EAP245 and now the EAP 660 HD. Both were rock solid devices. Managed locally via a web browser. Plugs into a netgear switch, into a pfsense router.
You're conflating "the NSA secretly rerouting shipping company deliveries to end-users, installing their firmware, then sending it on" with "Cisco willingly did that".
Cisco was unaware, and once aware (thanks to Snowden), Cisco took steps to try to prevent it, by altering shipping destinations at the last minute, en route.
We ban accounts that post like this. Please review https://news.ycombinator.com/newsguidelines.html and stick to the rules from now on. We've had to ask you not to post in the flamewar style to HN before, so this is a big deal.
So, while this whitepaper is news to me, how is this an "NSA backdoor"?
Reading up on this, it sounds like
* it was required, much as with phone tapping, by the US gov
* ergo, ISPs needed it, were mandated to have it
* therefore, Cisco implemented it
* this protocol was for lawful intercept. Police, FBI, everyone.
While beyond annoying, this is not a back door for the NSA. Nor is it even secret. Before you get all pissy, you should at least state fact as fact. Don't exaggerate. Don't make it about a specific actor when it isn't. And don't dress it up in whataboutism.
If your goal is to let people know, I assure you, speaking unvarnished, direct truth will help a lot more.
Nowhere is it said this was mandated.
That’s your assumption not supported by evidence.
So let’s run through it.
Cisco writes white paper supporting LE back door access.
LE/IC use hard coded back doors as revealed in the Snowden and Vault7 leaks.
You’re saying it never happened, ever.
Maybe you’re right (you’re not) but you spoke so firmly!
Do you know something I don’t?
In 2005 the FCC ruled that CALEA applies to broadband Internet providers
So yes, it was mandated. You may disagree with the ruling, but ISPs were required to do something, and Cisco enabled this on products for ISPs. Did they have it beforehand? Yes. However, this feature only existed on certain products, and other countries required it before the 2005 FCC ruling (again, per the IBM white paper).
But of course, this still isn't "Cisco put in back doors for the NSA". This is "Cisco putting in back doors for law enforcement, including even local police".
Further to that, everyone was aware of this. You can't have a 2010 white paper by IBM, before even the Snowden leaks (2013), if it was secret. And realistically, a "back door" isn't quite that if it is well known. It's just another access point in a product.
Secondly, the 'Snowden' leaks, which had everyone quite pissed, with even Google (whom I hate, but...) starting the big push for SSL everywhere, were not about these specific back doors.
Heck, this white paper is from 2010, and this 'law enforcement' "back door" was well known, AND!, not in all Cisco products! How, then, could Google be surprised by the revelation that this back door existed?
How could anyone?
It was not a secret. It was not in all products.
No, Cisco routers were infiltrated in two ways. Undisclosed vulnerabilities, which the NSA was aware of, and used against all router vendors to install NSA malware. And again, by intercepting shipments to end-users, installing NSA backdoors and malware, then resealing and shipping the product onward.
This is what the Guardian Snowden leaks talk about!
The big difference between China (and your whataboutism) and the US is this: if you don't let the Chinese government into your company, do precisely what it says, and install all the backdoor software it wants, then you no longer have a company, or your freedom, and maybe even your life.
Meanwhile, the NSA has been acting illegally, and does NOT have the support of US tech vendors. In fact, US tech vendors are hostile to the NSA's attempts to subvert their products, including lobbying US politicians to stop this sort of behaviour.
There is a vast difference between these two things, and in all of the above, Cisco did not willingly put "back doors" in anything for the NSA.
So in response to your question? Yes, I know something you don't.
History. Factual, actual, history. Not revisionist.
I'm happy to re-examine any of this, if you can provide links to data showing Cisco allowing NSA agents into its midst, and installing NSA spyware for its products at the factory. On purpose. Which aren't open, and were hidden from everyone.
Or something similar to this.
Because otherwise, your statement is absolutely, positively, not factual. How can I say otherwise?
And yes my original response was firm, because I've seen others say this sort of thing. We must be factual in our claims, not hyperbolic!
What IBM white paper?
Show me the law where this was mandated.
Because no, you are in fact misrepresenting the truth.
So, yes, I agree with you about not being hyperbolic.
However, let’s just say I have exceedingly applicable industry experience. (IC and LE)
I know beyond a shadow of a doubt that I’m right.
So now my burden is finding what I can in the public domain to share this truth with you without violating NDAs.
Btw, with respect to your 'show me the law', 'mandated' doesn't mean 'legislated'.
That very same IBM whitepaper you cited claims the FCC mandated it. As in, pushed an interpretation of a regulation. Are you claiming the whitepaper is wrong?
The whitepaper which you used to validate your claims?
Or, are only the parts of it which you agree with correct?
As far as the white paper, I mixed up Cisco and IBM in my head on that.
As far as “mandated”, laws and policy mandating back door access have been shot down repeatedly in the real world.
The claim of an FCC mandate in a white paper does not indicate legality of deployment in the real world is what I mean.
TP-Link's newer stuff wasn't supported by DD-WRT and wasn't going to be for a while there, so check first. They ship a crypto blob for the radio binary, or the entire firmware image, that the project would have to trust blindly without being able to adjust settings, or else violate the DMCA to reverse engineer it.
Don't know if this is still the case or not, but they did this for FCC compliance around the time 802.11ac was launching. That might have changed since, though; I'm not sure. I stopped considering them at that time.
Also a good company to look at would be Microtek, I have heard good things, but haven't looked into them directly.
Mikrotik, but unfortunately getting reasonable throughput for wireless clients is a serious challenge (I always have better results with openwrt on the same hardware). Still, nice to have local control and not have to rely on some cloud service just to use the hardware I bought.
Using 80MHz channels I found the default configuration never exceeded 200Mbit/s using iperf. For me "reasonable" is closer to 800Mbit/s, which is roughly the theoretical limit for 80MHz with 2 spatial streams. I run my tests with my devices sitting 1 meter from the AP. This is on a hAP AC, and like I said, I get much better performance (close to the theoretical max) running OpenWRT on the same unit. I have had similar issues with the RB4011 and cAP AC, and in both the NYC area and suburban Virginia (so it is not just an issue of spectrum crowding in the city).
Yeah, that sounds a bit slow. I suggest checking whether fastpath and fasttrack are working.
I remember that when I had a hAP AC with firewall rules applied inside the LAN, it also did not go much faster. A good indicator was CPU usage: if it sat at 100% CPU at ~200Mbit/s, then it was the firewall slowing things down.
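For reference, checking and enabling fasttrack on RouterOS goes roughly like this (a sketch; put the rule above your other forward rules):

    # see whether a fasttrack rule exists and is matching packets
    /ip firewall filter print stats where action=fasttrack-connection

    # add one at the top of the forward chain if it's missing
    /ip firewall filter add chain=forward action=fasttrack-connection \
        connection-state=established,related place-before=0

    # watch per-process CPU while running iperf to confirm the bottleneck
    /tool profile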
> Does anyone have a decent WAP where I can use PoE
There are PoE devices with OpenWRT support[1], and it should be possible to enable 802.11r if the hardware supports it. They can be managed locally, even with a self-signed certificate.
I use OpenWRT now and would really rather avoid it. I want a central controller, not every AP having its own UI. Plus firmware updates are always an adventure.
To somewhat eliminate the chances of adventure, I’ve profiled the setup for each of my many OpenWRT devices and created unique profiles for them in a (reasonably) simple Git repo[1].
All I need to do to get device-specific firmware is to update the OpenWRT version-number in a single makefile and the rest happens automatically.
I've even set up GitHub Actions to build the firmware for me (basically, run make), so I can even get/build new firmware from my phone.
I’ve yet to have any issues when flashing these builds. It used to be much worse when flashing the regular “official” OpenWRT image and restoring packages afterwards.
Couldn’t be simpler! (With the regular Linuxy you-have-to-build-it-yourself-first clause)
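The core of such a repo can be tiny. A minimal sketch using the official OpenWRT Image Builder (the version, target, profile, and package list here are illustrative; recipe lines are tabs):

    VERSION := 22.03.5
    TARGET  := ath79/generic
    PROFILE := tplink_archer-c7-v2
    PACKAGES := luci -wpad-basic-wolfssl wpad-wolfssl
    BUILDER := openwrt-imagebuilder-$(VERSION)-$(subst /,-,$(TARGET)).Linux-x86_64

    image: $(BUILDER)
    	$(MAKE) -C $(BUILDER) image PROFILE=$(PROFILE) PACKAGES="$(PACKAGES)" FILES=$(PWD)/files

    $(BUILDER):
    	curl -LO https://downloads.openwrt.org/releases/$(VERSION)/targets/$(TARGET)/$(BUILDER).tar.xz
    	tar xf $(BUILDER).tar.xz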
About 5 years ago I would have done the same thing. I want to set it up such that if I win the lotto and move away, the rest of my household can continue using the system without having to learn a CLI.
I don't know about you, but I "automate the old-fashioned way" at my day job, I want the damned thing to just work without me bothering with "SSH access and CLI tools" at home.
For those people here saying "go Ruckus unleashed" ... caveat emptor my friends !
I have it on very good authority that Ruckus has started rolling out a change in their pricing model to require an Unleashed license per AP to operate, a move which obviously increases costs to the end-user.
Some people might say it's a deliberate move to prevent cannibalisation of their main business model by nudging people away from Unleashed. I couldn't possibly comment.
My earlier comment was based on a change of policy which happened around 1st March, and any Unleashed quotes as of 1st March (and the two-weeks prior) need to be re-quoted for the new "license per AP" Unleashed model.
I've been a bit busy with other work since that bombshell dropped, but if I get a moment I'll try to dig up some pricing.
The other thing to note is feature discrepancy between Unleashed and standard. Perhaps of most interest to your average HN contributor was (the last time I checked) IPv6 was not supported on Unleashed firmware, and not much sense of urgency (if any !) to rectify that.
Thanks! I completely glossed over the IPv6 thing... At home I don't get native IPv6 from my ISP, so I just tend to forget about that. Although it would be neat.
For me, I bought my AP on eBay and just plopped the standalone Unleashed firmware on it, and that's all seemed fine. From what I can see, nothing is changing there? But it sounds like you're running a /much/ larger install.
Actually (and ironically given the context of this thread !) the reason I found out about the policy change was because I was helping someone out who was looking to dump their Ubiquiti kit and realistically it looked like Ruckus was going to be the only sensible option (despite the already unpalatable price premium before the new policy).
As you may or may not be aware, Ruckus have an "all quoted" policy; there is no price list per se.
At the time I was working on the project (late 2020) Ruckus did have a promotional activity going on where you could buy Unleashed kits at fixed prices without quoting.
However due to various technical questions that were coming up (e.g. IPv6 support) we missed the window and it was uncertain if Ruckus were going to extend the promotion.
Ruckus did extend the promotion, at least initially (Jan-Feb '21), but then they switched to the "license per AP for Unleashed" model and the promotion was killed off.
It was at that point that my friend took the hint and dumped the idea of Ruckus, and I went back to my normal work.
If I get a chance I'll try to find out what happens with second-hand kit. My guess would be that if you stay on old firmware there's not much they can do about it. Although whether it's desirable or advisable to stay on old firmware is another question, obviously.
Without going into detail because, well, you never know who's reading ....
TL;DR "WatchDog End User Support" is now mandatory for Unleashed and is sold and priced on a per AP per year basis.
The pricing is not too scary (a two-digit figure per AP per year). But I'm told the requirement is (will be?) enforced, so it's unlikely to be a case of being sneaky, paying for the first year, and "forgetting" to pay the renewal.
I'm a big fan of flashing OpenWRT on supported APs. You lose central management and setup takes time, but I'm very happy with the stability and no worries about cloud services or vendor lock-in etc.
I bought an R610 AP on eBay a few months back, flashed it with the Ruckus firmware (legally available to all from their site), and it does exactly what you want. On-prem only, no cloud, one of the APs will act as a controller/manager for the others, and they can all communicate via wired or meshing off of each other. One of them can even be a NAT thing if you want.
I think I paid around $160 because someone had a bunch of off-lease ones. But if you look up anything that supports the Unleashed firmware you'll be good. 802.11ax is the hotness right now, so the slightly older (but still great) ones are a LOT cheaper.
I replaced a Ubiquiti setup with a Ruckus R610 and small fanless running OPNsense (Protectli) with a basic switch and POE injector and it's excellent. Sure, it's not single pane of glass for it all, but the AP is rock solid and OPNsense is a solid known quantity. I've got no regrets.
Same here, I ditched my Ubiquiti and went with Ruckus and I could not be happier. I'm just so sorry that I ever bought into Ubiquiti's marketing when I purchased their AP. The Ruckus performs so much better and the mgmt software is light years better than Ubiquiti. I also run a Protectli but on OpenBSD (from pfsense originally).
Get Linux boards and USB-3 WiFi dongles with well-supported chipsets and roll your own?
The other alternative is to go way up-market and buy industrial gear. Consumer gear is shit due to a race to the bottom mentality. 90% of consumers buy the cheapest. This is also what turned every TV and appliance into a feature-encrusted shitbox full of spyware.
> Does anyone have a decent WAP where I can use PoE, deploy like 5 of them and have them support roaming between APs, all managed locally? Is that too much to ask?
Not as comprehensive as Ubiquiti’s management interface but the CAPsMAN feature on Mikrotik routers and APs does cover this use case.
Look on ebay for slightly older models. R710, R720 should be $200-$300. Not a replacement at scale, but the one-off purchase from ebay is fine for home use.
Unfortunately, without firmware updates they are little better than a brick. Especially for WiFi hardware, where you cannot control who can access it; better to keep your APs patched.
Aruba doesn't require a cloud controller, that's just the "Instant On" version.
I used to run Aruba Instant (not the "instant on", no controller), but gave those APs to a friend and now run an Aruba 7005 controller with 2x303H and a 324.
Support/licensing costs are totally worth it for having trouble-free WiFi with no cloud dependencies (context: I have used and supported UniFi in various roles since the first UAP came out, which I think was free for UWC attendees, though I could be confusing that with their first camera), but I am a network nerd who's comfortable with enterprise WiFi.
Edit: I got upvoted by somebody, but as a UI user I'm genuinely looking for an answer: is it still possible to get in if devices aren't connected to UI's cloud?
1. They are now pushing ads to their local controllers. That is a shady tactic. It also means the controller is phoning home. It means they might have an XSS in that code now or in the future.
2. They just deprecated a bunch of relatively new hardware. If I’m going to invest a non-trivial amount into their hardware I want to know it’ll keep working for a long time.
3. They lost trust due to this breach. How can I trust their code to secure my local network if they can't secure their own?
Also add that all of the SOHO equipment is garbage that drops connections randomly, crashes, or simply can't deal with some WiFi chips.
This is the reason I went with the Ubiquiti UniFi 6 years ago. It was the only one I tried that didn't constantly drop connections or cost a fortune. But it's only G, and I've been considering an upgrade, but there are no good options on the market that don't have stupid cloud management bullshit, aren't built on garbage hardware, and don't cost an arm and a leg.
Other than ubiquiti I assume you mean? Not that I know of. I want the old ubiquiti back where customers, not stock price and ad revenue, was the focus.
The TP-link offering looks very similar to Ubiquiti from a quick scan a month or two back.
Both will run from locally hosted controllers if desired.
I've been seeing more Cisco "Meraki Go" kit around as well, which looks to target the same use cases as Ubiquiti (very very similar gear, WAPs, low end switches & gateways), albeit without a local controller option, but at least without the usual steep Meraki subscription charges.
I know someone that works there and they seem pretty happy with the place and product. Just saw the Amazon link now, though, so that may be a detriment depending on your view of them. (I have never used their systems or anything, so it's not really an endorsement, but something to consider.)
Not 100% sure if that's what you are looking for (I don't do much network work), but I think Camsat's GlobalCAM-4.5G may be worth checking out, with one catch: the company targets the CCTV market. Still, it's just a router, without any special license fees or mandatory clouds.
Peplink seems pretty good; they do have a Cloud:tm: management offering called InControl2 but as far as I'm aware it's entirely optional. I've had good luck configuring everything via the local UI. My setup is a Balance Two + a few One AX APs.
Sure, there are plenty of solutions out there, but it's all going to be enterprise-priced: $600-$700 an AP, plus whatever is going to be the controller. In this space you'll find cloud-based options, controller-based options, and standalone.
If you are willing to go into this price range, I think FortiAPs feeding back to a FortiGate FW is a rock solid solution. But a FortiAP-431F is $616, and a base FG60F as controller is $535, plus service if you need it. And although you probably won't need repair options, support/maintenance is a yearly fee on top of that.
Ubiquiti was definitely a unique company, offering many of the enterprise features at consumer pricing.
I realize I'm a bit late to the party, but GL-iNet does this. They run OpenWRT, too! PoE support can be hit or miss, but being able to truly own my devices without compromising on features is amazing.
You probably want something like [0], which has PoE support and an optional Cloud connection. You can roll your own automation with (e.g.) SSH access since they are just Linux machines.
You're in the boat of deploying OpenWRT or similar low-cost APs presenting the same SSID on a shared VLAN, plugging them into your favorite PoE switch, and manually configuring their channels, transmit strengths, etc.
It isn't so bad if it's a one-and-done thing, but all of the out-of-the-box solutions are very IoT.
Enterprise solutions with your self-contained WLAN controller and APs (not including PoE switches) are typically pretty pricey (>$5k, can spend a lot more).
You can absolutely manage Ubiquiti locally. Even with a ridiculously named local appliance called a cloud key. Their cameras are unfortunately another story.
Are pfSense, VyOS, and stuff like that out of fashion? Or too hard to maintain? Automating that stuff with Ansible should solve the central management bit...
Yeah, of course you can. It's just FreeBSD with some configuration stuff on top; it can run hostapd, act as a switch, do lagg and span ports and all the other stuff you'd expect... not sure how common it is though.
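On the Ansible angle: a minimal sketch against VyOS using the vyos.vyos collection (the inventory group and the rule itself are illustrative), which also doubles as documentation of the port forward in git:

    - hosts: vyos_routers
      connection: ansible.netcommon.network_cli
      gather_facts: false
      tasks:
        - name: Ensure the port forward exists
          vyos.vyos.vyos_config:
            lines:
              - set nat destination rule 10 description 'web server'
              - set nat destination rule 10 inbound-interface eth0
              - set nat destination rule 10 destination port 443
              - set nat destination rule 10 protocol tcp
              - set nat destination rule 10 translation address 192.168.1.10
            save: true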
I bought some Ubiquiti gear a year ago (a pair of AC-AP Pros), and immediately after I got them I reflashed them with OpenWRT. Haven't had even one issue with them.
I get that people with larger networks would find centralized management useful, but I'm fine just managing a couple APs, a router, and a couple switches on their own. They're pretty much set-it-and-forget-it devices anyway.
Agree about TP-Link. I bought some Deco mesh kit for the house and am generally pleased with its performance. However the fact that I can’t configure them locally is a massive turn-off from buying the stuff in the future.
I used the TP-Link forums to put in local management as a feature request. Perhaps if enough people make a noise?
Unifi cloud controller is optional, but they don't make it easy to figure that out.
Setting up a UDM, the first thing I did was add a local super admin account, then disable remote access. That way, if their cloud auth servers are down I'm not affected, since I use the local admin account.
Maybe Plume Homepass: https://www.plume.com/homepass/ ? I'm not sure if they're 100% equivalent, but it seems to cover a good part of the Ubiquiti feature set.
Interesting. Subscription-based services in the home seem like a disaster waiting to happen. Unless you can self host in the event of a company shut-down, you're beholden to a company and their solvency.
Can't see anything on their website for a transition plan in the event of shutdown (and of course, why would they post that and potentially signal lack of confidence in their longevity).
Ruckus seems pretty good. You can use their Unleashed APs without cloud/controller/subscription. They support PoE and can connect up to 75 devices. I just installed them at my hotel.
We had Ubiquiti before, but power outages would usually corrupt the controller and require constant resetting.
I have exactly this setup with three Aruba Instant APs (WiFi 5), but afaict they’ve combined the Instant product line with their cloud offering or something? I’m not entirely sure where they’re going with it, but I am very happy with the setup I have.
Maybe their different product lines are managed differently, but all my UniFi WAPs, router, and switches are managed on a local controller that I installed and maintain myself.
I recall some features being locked behind a UBNT account, but that was only reporting-type stuff IIRC.
You can build one, but PoE might not be in the cards unless you want to convert the injected power back to a 5V barrel connector.
ALIX makes a decent router board that can host Linux, and dual PCI cards mean both 5 and 2.4 GHz APs. The total would be ~$200 for each "AP", but they would be pretty massively powerful.
That's awfully convenient for the company offering those products, but I want to control what happens on my network, even if that's inconvenient for some hardware vendor.
Case studies, focus groups, surveys and interviews are great ways to find the unknown unknowns. Of course, you need to pay people to participate in them, and then you need to pay expensive employees to conduct, collect and analyze the results.
It's often just cheaper to spy on customers, though, and pretend that there is no other possible way to conduct business.
> Case studies, focus groups, surveys and interviews are great ways to find the unknown unknowns. Of course, you need to pay people to participate in them, and then you need to pay expensive employees to conduct, collect and analyze the results
No they're not, because the vast majority of people simply won't be bothered, and most people probably aren't as reliable as concrete data.
People will be bothered if you pay them. DigitalOcean does this with focus groups for developers, and offers $500+ each for an hour or two of developers' time.
I was thinking of those as things you do before product release (so they're "known"). But they're not a good way to find out about reliability issues, because those only happen in especially weird situations, or over time, like running out of disk space.
Telemetry that tells you which features are popular is useful but does need filtering to avoid identifying individual users. But sending back errors and crashes is what's really important.
You can do things like have feedback forms but typically users don't like sending that in because they feel like they're doing work for free.
I have lots of devices that don’t phone home. Have been working for years. The company needing to know which websites I visit to make my network function does not speak well of the company.
PoE is probably Power over Ethernet. With that you don’t have to worry about laying down electrical line to power the APs. The APs draw power from the Ethernet line itself
Mikrotik is nice and does all of those things. Just needs actual expertise at network administration to set up. Once done though, it's fire and forget.
If you don't feel like configuring hostapd and dnsmasq I'm pretty sure there's an nmcli one-liner that will have network manager run a WAP for you. I use 'hotspot' on my phone all the time.
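Something like this, if memory serves (the interface, SSID, and password are placeholders):

    # NetworkManager brings up a WPA2 AP and handles DHCP/NAT for you
    nmcli device wifi hotspot ifname wlan0 ssid HomeNet password "changeme123"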
> Maybe putting your network control plane in 'the cloud' isn't such a good idea after all...
Isn't one of the major selling points of cloud-everything "How can you possibly secure your service better than BigRespectableCompany?" I know any time I bring up self-hosting E-mail or a web site or whatever, someone always comes out of the woodwork to remind me that I am not an expert in securing Internet services, and that BigRespectableCompanies have full-time employees dedicated to security. Surely I should be moving to the cloud for this expertise! This is sounding more and more like FUD to me.
Managed services with state-of-the-art IAM policies are more secure than lifting and shifting a Linux box running whatever PAM configuration was set up on it in 2005.
Ubiquiti really aren't in the same ballpark as AWS or Microsoft, which are the companies people use that argument for, and you can bet your ass their security is better than in most places.
This is a fallacy. Just because these companies have great security teams doesn’t mean that things don’t fall through the cracks. Shit slips past the security team in product meetings all the time.
The claim wasn't that they never have security flaws, the claim was that they almost certainly have fewer security flaws than the alternative self-hosted solution someone named MastodonFan87 comes up with.
You may be smart, and have secured your systems properly, but someone with the same resume as you in another company might not be.
As your manager, how can I tell the difference between someone who actually did the work right, and someone who said they did the work right (and also legitimately believes that they did)?
You never can be... but you should already know that, being a manager. If you're the target of an advanced persistent threat, it doesn't matter how good your guy is; they'll win eventually when the next 0day no one knew about shows up. But by then your cloud provider will have been broken into dozens of times already. Hundreds of companies have to do a security audit of all of their networks now because Ubnt got got. The only ones who don't are idiots, or not using Ubnt et al.
So what, you are suggesting a strategy of staying away from large services and hoping that you won't be targeted?
I posit that it doesn't take burning a zero day, or a coordinated effort by the CIA, the FSB, and Randy Waterhouse to break the typical DIY self-hosted security implementation. (And that the manager paying someone to build it has no ability to tell between a great, a good and a bad DIY job.)
A network controller for local WiFi shouldn’t be reachable from the Internet at all. I’ll take a vulnerability ridden controller on an isolated management VLAN over cloud shit any day.
It's odd how the big cloud vendors have been able to escape criticism for being completely open by default. Other vendors have been taken to task and have adopted better security practices. For example, SuperMicro IPMI comes with a random password now.
It's extremely difficult to lock down an AWS account when there are a bajillion services, IAM policies, roles, etc.. I've been trying for the last few days and it's so difficult that I can understand things like this. I don't think it's acceptable, but I can see how it happens.
I think the expectation for AWS, Azure, GCP, etc. needs to change. Accounts should allow nothing by default and part of the tutorial / learning process should be understanding the permissions needed for each service and how to limit access to those services. As a bonus, they should show you how to configure Budget Actions to catch anomalies and runaway services. For example, I'm trying to set up my account so SMTP access to SES gets revoked for SMTP users if the message count exceeds a certain threshold. It's really, really hard because there's not a single document / guide that shows the process from start to finish.
The triangle says Confidentiality, Availability, Integrity.
While your concerns are 100% valid, we need to remember that setting up access in restricted ways, and inviting users to understand the protections and remove the right barriers (or implement the controls necessary to interact with them), always runs the risk that some users will find the protections cumbersome and instead find a (totally incorrect) way to baffle them, or even route around them entirely, mooting any effort to secure the platform.
And every time I hear this played out in conversation, the answer is "that's on them!" But it's clearly a balancing act, a trade-off: tautologically, when you make the service less accessible, then it is, well, made less accessible.
Beyond facilitating secure access, sales conversion ratios will also depend on that accessibility. The crux of your argument stands: the defaults are too open, and we need to do more to ensure that naive users aren't handed a loaded gun aimed at their own feet.
Uhm... in the AWS I've used, it's explicit-allow, and all of their docs and tutorials start with IAM, what's needed, and why. What more do you want? I can't imagine IAM being simpler while being as granular as it is. You just have to actually take the time to learn it, like every system. It's still drastically easier to use it securely than doing something of similar scale and detail manually.
The hard part for me is figuring out how to disable access without breaking everything. I know it’ll be useful once I understand and I’ll take the time I need to learn it, but most people won’t.
I prefer the opposite learning direction. Start closed and open the 1 or 2 things I need instead of having to understand 1000 things immediately to configure permissions reasonably.
Have you tried Access Advisor in AWS IAM? It’s been out for a few years now and is specifically targeted at using “... last accessed information to refine your policies and allow access to only the services and actions that your entities use.”
Can you explain how IAM doesn’t work well with the “starting closed” approach? IAM authorization is “default deny” and every principal needs an explicit allow statement with the appropriate action before authorization will pass.
> Can you explain how IAM doesn’t work well with the “starting closed” approach?
It works ok once you do a lot of learning and read the best practices. I think a lot of people will skip that and use their root account for everything.
The biggest mistake I made was creating an admin user, but giving it too many permissions and using it like a normal user.
After learning more I use the root account to make an admin account, but I think the admin account should only use IAM to create other fine grained users.
So it works fine, but I think it would be better to force people into creating those first couple of accounts with permissions chosen by experts. It’s too easy to jump right in and start using an over privileged account.
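To illustrate, the kind of narrowly-scoped user that admin account should be creating looks something like this (a sketch; the user, policy, and bucket names are illustrative):

    aws iam create-user --user-name deploy-bot
    aws iam put-user-policy --user-name deploy-bot --policy-name s3-deploy-only \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject"],
          "Resource": "arn:aws:s3:::example-deploy-bucket/*"
        }]
      }'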
You can use AWS accounts like microservices. The biggest security walls in AWS are the account barriers; those have to be specifically configured to cross. Sometimes (1%) it's unavoidable, but if you have multiple services running in one account, you force yourself to weave arcane webs of IAM permissions crisscrossing all over to get what you need where. It's a terrible model that people inflict on themselves because it's how everything used to work.
Spinning up your own DB instance is also "open by default" and takes both effort and expertise to secure properly. I think it's pretty reasonable that there's a large surface area of IAM permissions when AWS offers a vast number of disparate services.
>If this is true, and whoever breached them had full access to their AWS account, can we really trust them to clean up all their tokens and fully eradicate all forms of persistence the hackers may have gotten?
This is the same for any breach. At least if you're using AWS, you know that your management tools aren't lying to you (as long as you assume AWS itself isn't hacked) and you can use those tools to cleanup. If you run your own machines, you can't assume your management tools work correctly. All your machines could have rootkits, all your tools could contain backdoors, and every attempt to cleanup might just be a fake veneer. See Reflections on Trusting Trust.
Full disclosure I work for a cloud computing company (but not AWS).
> can we really trust them to clean up all their tokens and fully eradicate all forms of persistence the hackers may have gotten?
The state of security in the tech industry is miserable. The only companies we should trust not to leak our data are those that never collected it in the first place.
We are certainly not having this conversation enough. I regularly chat with a risk officer and she keeps telling me: data minimization is your first line of defense.
Heck, most operating systems are leaky by default.
Even OpenBSD, which has a stellar track record in terms of security and "goes against the grain" on many decisions for the sake of secure-by-default (for instance, disabling hyperthreading altogether to prevent any kind of SPECTRE vulnerability), is under constant scrutiny for not being secure enough.
Maybe connecting everything to a network and making it a high value target by collecting everyone's data is just a terrible idea in the long run.
I haven't got much sources for you but what I've picked up over the years: a lot of OpenBSD's security is just old fashioned manual code review and audits, and there are not enough eyeballs. Someone like Ilja van Sprundel can go in the source code and find a bunch of issues without too much trouble [1]. I don't see any concentrated efforts to improve the status quo (where's formal methods, where's automated fuzzing, where are initiatives to employ more safe programming languages, static analysis, etc.). And while OpenBSD pride themselves on their mitigations, they aren't exactly state of the art and some of the more recent stuff (like trying to eliminate ROP gadgets) seems just futile. The biggest thing OpenBSD did with mitigations was enabling them by default for the base system and ports. What does anyone remember OpenBSD for in 2010-2020? Pledge, probably. That's a nice thing but more for containing the damage than actually making stuff secure in the first place.
My concern (and the concern of many others, I think) is that if OpenBSD suddenly got enough attention from the wider security community, including people who actively look for holes that can be exploited, there'd be plenty of important stuff found. Until then, these issues sit quietly waiting for a malicious party to discover them. There's quite some fanfare for OpenBSD, but how many of you are actively auditing the code? I'm subscribed to cvs@ and tech@ and I read them daily and I just don't see much contribution at all from outsiders. And when I do see it, it's mostly stuff like fixing typos or amending man pages. All the commits that change code with security implications tend to come from the core developers, and are reviewed by a handful of people at best. And I have seen some obviously broken stuff slip through.
> if OpenBSD suddenly got enough attention from the wider security community, including people who actively look for holes that can be exploited, there'd be plenty of important stuff found.
This seems like a structural advantage to less popular software. If your software is less common, attackers will have put less time into exploiting it, and therefore you will be more secure. My impression is that MacOS and Linux both benefited from this relative to Windows for a long time.
In general this should be true if usage grows faster than security resources for the popular system. It might still be true even with significant, commensurate investments in security while you grow, because if a small percentage of users misconfigure the software and create vulnerabilities, that population will hit a critical mass with growth regardless of your security efforts.
Man, I really wonder why the lack of proper 2FA is so widespread.
Is it really cost and complexity?
Or just missing awareness?
Or the lack of consequences when you get hacked in a way that could easily have been prevented (though then they might have attacked in a different way, tbh)?
It's people not getting it and being plain annoyed by the second factor. YubiKey or Authenticator app on a different device... it's too inconvenient and people often only do it if forced (e.g. banks do this afaik).
Every day I sit at the same desk, at the same computer, logging into the same websites, using 2FA over and over and over and over while sites time out "for my protection". It's a plague. Write a damn desktop app I can run locally, I didn't ask for people from Turkmenistan to be able to login as me, so you could sell me a halfassed web version of something.
Joseph Heller predicted 2FA in Catch 22 when he wrote:
"Almost overnight the Glorious Loyalty Oath Crusade was in full
flower, and Captain Black was enraptured to discover himself
spearheading it. He had really hit on something. All the enlisted men
and officers on combat duty had to sign a loyalty oath to get their map
cases from the intelligence tent, a second loyalty oath to receive their
flak suits and parachutes from the parachute tent, a third loyalty oath
for Lieutenant Balkington, the motor vehicle officer, to be allowed to
ride from the squadron to the airfield in one of the trucks.
Every time they turned around there was another loyalty oath to be signed. They
signed a loyalty oath to get their pay from the finance officer, to
obtain their PX supplies, to have their hair cut by the Italian barbers.
To Captain Black, every officer who supported his Glorious Loyalty
Oath Crusade was a competitor, and he planned and plotted twentyfour
hours a day to keep one step ahead. He would stand second to
none in his devotion to country. When other officers had followed his
urging and introduced loyalty oaths of their own, he went them one
better by making every son of a bitch who came to his intelligence
tent sign two loyalty oaths, then three, then four;"
Notice how 2FA turns into MFA? Keep adding FA until you're as secure as the security theater demands.
"To anyone who
questioned the effectiveness of the loyalty oaths, he replied that
people who really did owe allegiance to their country would be proud
to pledge it as often as he forced them to. The more 2factor logins
a person went through in a working day, the more secure he was;
to Captain Black it was as simple as that"
"Captain Piltchard and Captain Wren
were both too timid to raise any outcry against Captain Black, who
scrupulously enforced each day the doctrine of 'Continual
Reaffirmation' that he had originated, a doctrine designed to
trap all those men who had become insecure since the last time they
passed a 2factor authentication prompt a few minutes earlier."
Honestly, Windows does this right with AD, Kerberos, and SPNEGO.
You login to a physical machine with a password (the machine is trusted on the network via AD so physical access is one factor and password is a second)
You visit websites and they use SPNEGO to land on Kerberos or NTLM auth which then bootstraps off the fact you're already authenticated to Windows. You never even need to see a login page
It's achievable with macOS and Linux but afaik there's some more configuration to be done. The only place I saw with a setup like that was a bank and it was part of a new technology stack that almost nothing used yet
With that setup there's almost nothing to phish if you can train people to only enter their password into the OS at login. You can pretty much eliminate the possibility of credential sharing by locking logins to certain machines.
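Concretely, the client side amounts to very little (a sketch; hostnames are illustrative):

    # Firefox: allow Negotiate/Kerberos auth to intranet hosts (about:config)
    network.negotiate-auth.trusted-uris = .corp.example.com

    # curl: authenticate with the existing Kerberos ticket, no password typed
    curl --negotiate -u : https://intranet.corp.example.com/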
He could have had 2fa on his console account but saved an access key for CLI access. Many large organizations have an infrastructure where you exchange your corporate authentication (including 2FA) for a short lived AWS access key, but AFAIK this isn’t out of the box.
This seems incredibly clunky and most people are probably not doing something that involves typing the ARN of their MFA device on a day to day basis. To be tenable on a daily basis you need something like “aws login” with username, password, and code that sets up your credentials file correctly. Expect people to copy and paste values around, and you’ve already lost.
Not to mention legacy code that only knows about access key ID and secret, and doesn’t have a place to even put a token.
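For reference, the clunky flow being described is roughly this (the account ID, user name, and code are illustrative):

    aws sts get-session-token \
        --serial-number arn:aws:iam::123456789012:mfa/jdoe \
        --token-code 123456
    # ...then paste AccessKeyId/SecretAccessKey/SessionToken into ~/.aws/credentials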
> Man, I really wonder why the lack of proper 2FA is so widespread.
Because it's a giant PITA unless you have a dedicated team managing it. And the service companies get this and charge accordingly (aka enterprise levels).
It's why companies like Auth0 get bought for gigabucks.
After the Unifi Video fiasco, I bought a UDM Pro to test Unifi Protect.
Once I saw it required cloud login I got scared.
After I saw a Ubiquiti SSH key preinstalled in a device with unfettered internet access, I shut it down, never to bring it up again.
There was no option to bypass cloud login when it got to my hands. Apparently that has been "fixed" with some update, but if you buy a device and it comes with outdated firmware, as tends to be the case with their cameras and APs, your only choice is: activate on cloud, set up, update, factory reset, set up locally.
About 2... I guess when you've got access to all their source and infra, it's just a matter of pushing an update to enable SSH; they don't even need to push a key. My problem with the keys is that they come bundled with the device and you don't know it. There's no reason for them to install a key in there without your consent. Imagine Microsoft presetting an Administrator account on every Windows Server without telling anyone... It's just a security problem, even more so in a firewall.
> Maybe putting your network control plane in 'the cloud' isn't such a good idea after all...
Sure it isn't. It is an extremely bad idea, and honestly something like the Ubiquiti breach doesn't even surprise me; once you have worked in the "enterprise(tm)" world, nothing like this seems strange.
There is just no way I would buy a router that communicates with 3rd-party servers, and letting it access the LAN is a complete no-go (even when I'm paying for an ISP router as part of the package, it runs as a bridge just to pass the connection to my router).
I consider the router the first line of defense for inbound traffic and the last line of defense for outbound, and there is just no way I would trust some fishy corporation with this.
And if the corporation actually promotes cloud access, like Ubiquiti or Google, they are pretty much banned from my shopping list for all time.
The breaches are common, the reporting/discovery of them is not. Security just isn’t a priority for a lot of Orgs, as the consequences are minimal (see: Equifax) due to a lack of regulatory or financial penalty pain when a breach occurs.
"Help yourself to a free year of identity theft insurance" and all that jazz.
This is correct. I worked for a fairly large corp with lots of customer data, and while I haven't witnessed breaches of said data, it's pretty much a matter of time.
Me and my colleagues always pushed for more secure setups and configs but the common rebuttal was "no need there's a keycloak running several layers above and you need to use a VPN and need access to AWS first, go implement features instead."
I hope for them that no rogue employee decides to play around a bit or that no one stores their credentials in some cloud LastPass account with a '123456qwerty' master password.
Yes, if they destroy all of their backups, all of their hardware and every one of their current AWS accounts. Then start entirely from scratch. Any measure falling short of that (and let's be reasonable, it definitely will) means that they're entirely untrustworthy from now on.
Of course having your home network controlled from the cloud should already have been entirely untrustworthy, so in practice it won't be an issue for their sales.
There is Fortinet (which acquired Meru 5 years ago). Meru was pretty OK; I helped manage a setup of 2500+ access points on a campus.
I left that job 6 months after Meru was acquired, so I can't say how they are now.
Got 3 no-brainer CVEs against them. We're an enterprise customer who is now moving away, because after Fortinet acquired them support dropped off a cliff. They had some good people, but it became rather apparent that there was a bit of a toxic culture there.
When you're operating such massive services, at minimum you should protect the admin accounts not just with 2FA but also with an IP firewall. Looks like both were missing here...
This isn't really true. If you have an AWS account, you need a global god admin. That's the root user. As an IT guy, I have to store those creds somewhere. So I make the password super long and random, store it in LastPass, add 2FA, and add alerting for all logins. It's never used except in the super rare case we have to do something that requires the mega-god-level privs of the root account (like changing billing to a master account etc).
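The alerting part can be as simple as an EventBridge rule matching root console logins (a sketch; the rule and SNS topic names are illustrative, and it assumes CloudTrail is enabled):

    aws events put-rule --name alert-root-login --event-pattern '{
      "detail-type": ["AWS Console Sign In via CloudTrail"],
      "detail": {"userIdentity": {"type": ["Root"]}}
    }'
    aws events put-targets --rule alert-root-login \
        --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:123456789012:root-logins"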