I got the iPhone 13 mini as a work phone for the sole reason of it being the smallest iPhone at the time. I too dislike today's phone landscape, with its ridiculous, ever-increasing sizes.
My personal device is a Motorola Razr 50 Ultra, which I got because, while it's huge when flipped open, it's portable enough when closed. I can have it in my pocket without it falling out, and without it being annoying while I put on shoes, etc.
I use its cover screen a fair amount too, to avoid having to flip it open, which is also why I got the Ultra rather than the slightly smaller version.
The iPhone 13 Pro -> 16 Pro upsizing is ridiculous. The 13 was just the right size, but now they had to change it so they could sell more cases. It's almost phablet-size now. Look at an iPhone 6S by comparison.
The iPhone 13 Pro is 71.5 mm × 146.7 mm × 7.65 mm [1] and the iPhone 16 Pro is 71.5 mm × 149.3 mm × 8.25 mm [2].
While it did get a tiny bit bigger, I wouldn't have noticed unless I looked up the specs, especially as it got lighter at the same time, from 204 g [1] to 199 g [2].
This is an “unpleasable customer” problem. When the 13 Pro was current, everyone was yelling at Apple that it was too thin and that they wanted a slightly thicker phone with more battery life, which is what Apple did.
Not really. The iPhone 13 Pro non-Max was fine. If people want bigger, they can get a Max phablet. It was the 6 and 6 Plus especially that had bendgate. They sometimes try too hard to make them thin; I'd be fine with a 5-7 mm thicker phone. Apple should do what it's best at... not listening to frivolous criticism.
It's not my hill to die on, but I will say I use wireless in-ear monitors myself to avoid ever having to deal with adapters, because... adapters are terrible: often wonky in one way or another, and incredibly inconvenient for anything but lying on a desk. They're also something you easily forget to carry around, or lose, or break because of shoddy build quality.
It's a bad alternative to something that wasn't a problem, except that it took up space. And people still talk about it because there's still a need for something better.
"Study smarter" or just make it easier to cheat... you be the judge of that. Maybe we should just go back to doing tests and exams on paper? What was the benefit of going digital with this again?
The first mistake the developer made was that he wanted to create a different user experience between keyboard and mouse. Stick to what you get by default and design your components so they work for both use cases. Don't try to be smart when it comes to accessibility.
What he ended up doing is what I would consider a hack: a solution that inevitably breaks or has side effects.
The reason there are rarely good hooks for doing things differently in an accessibility context is that it's not something that's meant to be handled differently.
See, I work in accessibility. I create and provide solutions directly to end users with complex needs, not regular web accessibility, so I get this view. It's the same idea as universal access. But I don't fully agree. Yes, stick to this principle if you can, and do try. But I promise you that edge cases (which, in themselves, are what accessibility users are all about) cause headaches. At some level you have to do custom stuff; it's the best way. Take switch users, for example. If your UI is tabbable, great. But what if you need your items scannable in frequency order? Your tab order needs to change to meet the end user's needs. Or eye-gaze users: the accuracy level changes. Add in cognitive issues. You can't just make a one-size-fits-all interface; at some stage you need to significantly customize it. You can't rely on a user just learning a complex system-level interaction technique. If they can't do that, you have to customize on an individual level.
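For illustration, a rough sketch of what frequency-ordered scanning could look like on the web; the usage counts, the data-action attribute, and the selector are all invented for the example:

```typescript
// Rough sketch: reorder switch scanning by how often this particular user
// activates each control. Usage counts and data-action names are invented.
const usageCounts: Record<string, number> = { play: 120, next: 45, settings: 3 };

function applyFrequencyOrder(container: HTMLElement): void {
  const items = Array.from(container.querySelectorAll<HTMLElement>('[data-action]'));
  items
    .sort((a, b) =>
      (usageCounts[b.dataset.action ?? ''] ?? 0) -
      (usageCounts[a.dataset.action ?? ''] ?? 0))
    // Positive tabindex values are normally discouraged; here that's the
    // point: a per-user customization, not a general-purpose default.
    .forEach((el, i) => el.setAttribute('tabindex', String(i + 1)));
}
```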
Of course there are edge cases. I work with accessibility too, for an app in the public sector where WCAG rules are no joke, so I know this as well. But even so, we don't build custom accessibility UIs for our users. We (try to) build the UI with accessibility in mind so it's scalable and can be used and navigated properly with VoiceOver and a keyboard.
On mobile it's not perfect either, but in general you do have features to change things like focus, grouping of elements, how the keyboard navigates the view stack, how to reach a button through custom actions, and, like you mention, changing the tab index programmatically.
Even so, not everything can be fixed or handled through standard accessibility means, and as such, hacks will inevitably make it into products.
I get what you're saying, but I still think that making things accessible and designing with common accessibility needs in mind should be the default, and as such it has to be thought about from the get-go when designing and developing. Creating custom interfaces to fulfill a specific need might be a good fit for some things, but not when developing apps and websites, unless you're targeting that user group specifically.
Well said! It certainly applies to web development as well. Sadly, sometimes more complex solutions are needed - especially when based on user research.
> The first mistake the developer made was that he wanted to create a different user experience between keyboard and mouse. Stick to what you get by default and design your components so they work for both use cases.
We have. The behaviour is mostly the same whether you're using the keyboard or a pointer (mouse/touch/pen). The only difference is that, for keyboard users, we want to turn off the animation and move the focus to the first link in the menu instead of focussing on the menu's parent <ul>.
The problem was that, as various devs have iterated on the menu over the years, the fallback behaviour has broken. For my colleague with the funny multi-monitor setup, it should have fallen back to the keyboard no-animation behaviour, with no real difference to the UX; instead it fell back to the no-JS experience.
So yes, generally don't try to be smart with accessibility, avoid ARIA attributes except where necessary, etc. But the click event is the universal input event: it works on any kind of input device and has perfect browser support. It's far better for accessibility to use it than a mix of keydown and mousedown or pointerdown, which risks missing other kinds of input events.
As I stated in another comment, if it were a scenario where there needed to be a major difference in behaviour between keyboard and pointers, then I would rather use separate keydown and pointerdown events.
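For illustration, a rough sketch of what a single click handler could look like. The event.detail === 0 check is a common heuristic for keyboard-triggered clicks in most browsers, not a spec guarantee, and not necessarily what the BBC shipped; the element IDs and class names are invented:

```typescript
// Rough sketch: one click handler for every input device. Element IDs and
// class names are invented for the example.
const toggle = document.getElementById('menu-toggle') as HTMLButtonElement;
const menu = document.getElementById('menu') as HTMLUListElement;

toggle.addEventListener('click', (event: MouseEvent) => {
  const isOpen = toggle.getAttribute('aria-expanded') === 'true';
  toggle.setAttribute('aria-expanded', String(!isOpen));
  menu.hidden = isOpen;
  if (isOpen) return; // we just closed it

  // event.detail is the click count for pointer clicks and 0 for
  // keyboard-triggered activations in most browsers: a heuristic, not a
  // guarantee, so only use it for non-essential differences.
  if (event.detail === 0) {
    menu.classList.add('no-animation');                   // skip the animation
    menu.querySelector<HTMLAnchorElement>('a')?.focus();  // focus first link
  } else {
    menu.classList.remove('no-animation');
  }
});
```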
The _mostly_ same behavior is what caused the problem though :P I'm curious, did these solutions come to pass because you had to make adjustments based on actual user feedback or was it just a developer trying to think ahead? I'm questioning whether forcing the user to tab to get to the menu item is a hindrance at all or whether the animation was a problem.
Maybe the former could have been solved using ARIA tags or maybe it would require bigger changes to the component itself. Accessibility is a roller-coaster for all these reasons alone.
> What is the benefit of the animation to the user?
Animations enhance the experience by drawing attention to state changes and providing intuitive feedback on user actions.
If you don't find them engaging or useful, that's fine (you can enable reduced motion on your device, which sites pick up via prefers-reduced-motion), but many people do.
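For example, a menu opener can check that preference before animating; a rough sketch (the class names are invented):

```typescript
// Rough sketch: respect the user's reduce-motion setting before animating.
// matchMedia and the prefers-reduced-motion media query are standard; the
// class names are invented.
const reduceMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function openMenu(menu: HTMLElement): void {
  if (reduceMotion.matches) {
    menu.classList.add('open');             // show instantly, no transition
  } else {
    menu.classList.add('open', 'animated'); // let CSS run the transition
  }
}
```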
> What is the benefit of focusing on the menu’s parent to the user?
The first item was not interacted with nor navigated to, therefore it shouldn't be focused under normal circumstances. It would be unexpected behavior.
Focusing the first item in keyboard interactions is an accessibility hack recommended by W3C:
> Animations enhance the experience by drawing attention to state changes and providing intuitive feedback on user actions.
> If you don't find them engaging or useful, that's fine (you can enable reduced motion on your device, which sites pick up via prefers-reduced-motion), but many people do.
The question here is not "does an animation have worth", but how is that worth tied to whether an onclick event originated from the mouse or the keyboard? Your reasoning applies equally to both, and thus leaves us still confused: why are we varying the animation by input device?
I don't actually agree; I think you can keep the animation and still make the content available immediately for screen readers. (And of course, keyboard navigation is not just for screen-reader users!) Maybe someone else knows of some issue I don't.
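Something like this rough sketch, for instance: expose the menu to assistive tech right away and keep the animation purely visual (the class name is invented):

```typescript
// Rough sketch: make the menu available to assistive tech immediately and
// keep the animation purely visual. The class name is invented.
function openMenu(toggle: HTMLButtonElement, menu: HTMLElement): void {
  toggle.setAttribute('aria-expanded', 'true');
  menu.hidden = false; // in the DOM and the accessibility tree right away
  // Next frame, add the class that triggers a CSS transition on
  // opacity/transform only, so the content is never hidden from readers.
  requestAnimationFrame(() => menu.classList.add('open'));
}
```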
> The first mistake the developer made was that he wanted to create a different user experience between keyboard and mouse.
No, they wanted to make them the same. It's just that giving a blind person the same experience as a sighted person requires different things, because they operate differently, for obvious reasons. For example, a blind person can't see when an animation has finished. They expect that menu to be available once they've triggered it. Sighted people, however, see the dropdown appear and then use it once it's ready.
> Don't try to be smart when it comes to accessibility.
In all seriousness, considering the state of accessibility as is, I think going outside the box isn't trying to be smart. It's actually being smart. The BBC frontend team is probably at the forefront of making high-traffic websites extremely usable.
> a blind person can't see when an animation has finished. They expect that menu to be available once they've triggered it. Sighted people, however, see the dropdown appear and then use it once it's ready.
A blind person can and should get cues from their assistive technology that an item is being loaded and is shown, either via announcements or ARIA tags that provide this information to the user.
While it's fine to expect that something is available immediately, that's rarely a realistic expectation, whether you're blind or not.
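For example, a polite live region can announce the state change; a rough sketch (the wording and the visually-hidden class are invented):

```typescript
// Rough sketch: a polite live region announcing that the menu has opened.
// The message text and the visually-hidden class are invented.
const status = document.createElement('div');
status.setAttribute('role', 'status'); // implicitly aria-live="polite"
status.className = 'visually-hidden';  // hidden visually, still announced
document.body.append(status);

function announceMenuOpened(): void {
  status.textContent = 'Navigation menu expanded';
}
```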
> For example, a blind person can't see when an animation has finished. They expect that menu to be available once they've triggered it. Sighted people, however, see the dropdown appear and then use it once it's ready.
For my two cents, the BBC was simply trying too hard to be "cutesy". Don't animate anything; the silly animation on mouse click just makes the website feel slower overall. Just open the menu as fast as the user's browser will open it.
That wouldn't change anything. They want the first element of the menu to be focused when "clicked" from a keyboard but not from a mouse. The animation doesn't affect that.
Animation helps to correlate screen elements. Without animation it actually takes longer to establish the mental relationship between the action and the result.
I prefer the line: “make it as simple as possible, but no simpler”
Sometimes complexity is simply the right tool for the job. Complexity is essential and valuable in all sorts of places - like fuzzers, LLMs, compilers, rendering engines, kernel schedulers and so on. But projects only have so much complexity budget to spend. I think I've spent my whole career trying to figure out how to spend complexity wisely. And I could spend decades more on it.
The BBC site has a "search box" that's actually a button that brings up the real search box. Always feels confusing. At least it's consistent across News / Sounds / iPlayer.
I think there is no browser bug here, though using negative screen coordinates is probably going to be surprising to a lot of folks.
However, the BBC's intent seems quite sound to me from an a11y point of view, and their commitment to a11y is commendable. Though it's likely that for some browsers their attempts at defining their own a11y experience will result in a bad UX anyways.
While I understand your question is about the Transit app in general, I'd just like to mention, in relation to the article, that my team and I worked with one of the public transport operators in Denmark to utilize the motion-predictability feature found in the Android and iOS SDKs, so I can enlighten you with some details regarding that.
Our conclusion was that the feature didn't work in the Danish metros, for reasons we never got to deep-dive into. It's most likely related to the fact that many of the metro stations are built in concrete; as such, there's no GPS data in most of them unless you're very close to or at the surface, and no motion data either.
I'd be surprised if they got this particular feature working but who knows... maybe if we had looked into the raw sensor output we might have been able to work something out.
In the end, we built a solution that helps determine whether you're moving or not by utilizing beacons.
The whole concurrency agenda with local reasoning sounds great in theory, but in practice it becomes such a pain in the ass for projects that have existed for years.
Maybe our current app has unknown data-race bugs, maybe not... with a crash-free session rate of 99.80% and hundreds of thousands of monthly users, it's not a big enough problem that I'd add more friction to the language just to maybe fix it.
This is pretty much the conclusion we also ended up at: data-race issues aren't our main issue right now, although zero would be a nice-to-have. Every time I've tried out Swift 6 language mode, I also feel like I'm sometimes appeasing or tricking the compiler rather than thinking about actual problems.