I pontificated earlier (😅) about the challenge of using a 5-star review system (anywhere, but particularly on the ATStore). I'd love to see us embrace other mechanisms that could be more informative on the breadth of what the Atmosphere offers, and might even provide a lexicon that other Atmospheric apps might lean on for reviews too!
The big challenges I see with written & 5-star reviews:
- As Ariel mentioned, stale reviews mean you often need to pay close attention to when the review was written and what changed on the platform since — who has time for that when browsing lots of potential things?
- Having a time component to how reviews are surfaced seems important
- It requires a bunch of effort from a potential reviewer to summarize the entire app, and more so for the summary to be accurate even from their own perspective; which means folks either don't bother, or pick a particular aspect to shower praise or criticism upon.
- Making it super easy and fast for anyone to offer a high-context review would be excellent… if possible
- If folks find something they don't like about an app, the reviews can become a place to 'shout into the void' in a way that can appear overly entitled, stir discontent, and produce pile-ons, when it's a reasonable & genuine frustration (if perhaps ungracefully framed)
- Finding a way to channel frustration into something owners can turn into improvement seems important
- Ensuring there's good context as to what a reviewer should expect from leaving a review seems important
- In small communities, and especially in open systems like atproto, your rating is also a means of support — a "critique in private, praise in public" kinda thing — so early ratings can seem rather one dimensional or fake
- Making sure there's a way to offer balanced feedback without being unsupportive seems important
- There are many interesting critiques of 5-star review systems, often pointing out how folks vote 5 for 'good', 1 for 'bad', and 4 for 'excellent except for that one thing I want', making it very hard to get useful data out, even when there are lots of respondents.
- Gathering data that maximises the ability for useful analysis seems important
- A single thing often has different value in different contexts. 5 stars for novelty — I want to support this, 1 star for dependability — I wouldn't use it foundationally, yet.
- Allowing expression for nuance, without inflating complexity, seems important
What (I think) people want from a review system:
- To know the thing is worth investing a little time in (via social proof)
- it does what it says
- it's not scammy/doesn't exhibit any dark patterns
- To evaluate A vs. B, if there are two similar things
- To have expectations set. E.g. a thing trying to be artistic and beautiful at the expense of a broad feature set vs. a thing trying to be fully featured at the expense of good looks
- … (probably much else not in my head right now 😅)
To that end, here's a pet idea I've had knocking around for a long time, that feels rather suited to atproto, that I call "is/not" reviewing.
- There is an open list of adjectives a review platform has defined for writing reviews.
- A reviewer is offered a handful of them* as three-way toggles; e.g. "beautiful" → "is beautiful", "is not beautiful" or "not asserted" (the default)
- The reviewer picks the ones they have strong opinions about, and offers that "thumbs-up/thumbs-down" verdict on each of the adjective dimensions.
- You could capture a text review too, if you like
- The review system then aggregates these votes in any number of dimensions: how polished it is (is beautiful/is clean/is not work-in-progress), how fun it is (is exciting, is not boring, is artistic), and so on.
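To make the mechanism concrete, here's a minimal sketch of how those toggles could be stored and aggregated. Everything here is hypothetical — the adjectives, the dimension bundles, and the data shapes are illustrative, not a proposed lexicon schema:

```python
from collections import defaultdict

# Hypothetical data model: each review maps adjectives to a verdict
# (+1 = "is", -1 = "is not"; unasserted adjectives are simply absent).
reviews = [
    {"beautiful": +1, "clean": +1, "exciting": -1},
    {"beautiful": +1, "work-in-progress": +1},
    {"boring": -1, "artistic": +1},
]

# Dimensions are weighted bundles of adjectives; a negative weight means
# the dimension is boosted when the adjective is voted "is not"
# (e.g. "is not work-in-progress" counts toward polish).
DIMENSIONS = {
    "polish": {"beautiful": +1, "clean": +1, "work-in-progress": -1},
    "fun":    {"exciting": +1, "boring": -1, "artistic": +1},
}

def aggregate(reviews, dimensions):
    """Sum each review's verdicts into the dimensions they contribute to."""
    scores = defaultdict(int)
    for review in reviews:
        for dim, weights in dimensions.items():
            for adjective, verdict in review.items():
                if adjective in weights:
                    scores[dim] += verdict * weights[adjective]
    return dict(scores)
```

One nice property: because "not asserted" is just absence, the aggregate only ever reflects opinions people actually held strongly, rather than forcing a middle-of-the-road number out of everyone.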
You can then extend this as the dataset grows:
- Paired with a "Like" button, you can extract that a particular person always rates "is beautiful" for the things they Like → now you can up-rank things specifically marked "is beautiful" for that person
- You can use different languages for the adjectives, and map them in a non-1:1 fashion; "es rico" could boost an "is expensive" signal in Spain, but an "is delicious" signal in Latin America.
- (Hell; with enough data you can eventually ignore what the tag's text is entirely, and just use clustering algorithms to see how a person's ratings fit in with the wider ecosystem)
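The "Like"-pairing idea above could be sketched like this — again purely illustrative (the app names, vote shapes, and the simple frequency-based affinity measure are all assumptions, not a real ranking algorithm):

```python
# Hypothetical: a user's is/not verdicts per app, plus their Likes.
user_votes = {
    "app-a": {"beautiful": +1},
    "app-b": {"beautiful": +1, "exciting": +1},
    "app-c": {"exciting": +1},
}
user_likes = {"app-a", "app-b"}

def adjective_affinity(votes, likes):
    """Fraction of Liked items the user marked 'is <adjective>'."""
    counts = {}
    for app in likes:
        for adjective, verdict in votes.get(app, {}).items():
            if verdict > 0:
                counts[adjective] = counts.get(adjective, 0) + 1
    return {adj: n / len(likes) for adj, n in counts.items()}

def rank(candidates, community_tags, affinity):
    """Up-rank candidates whose community adjectives match the user's affinity."""
    return sorted(
        candidates,
        key=lambda app: -sum(affinity.get(a, 0) for a in community_tags.get(app, [])),
    )

affinity = adjective_affinity(user_votes, user_likes)
# "beautiful" co-occurs with every Like here, so apps the wider community
# marked "is beautiful" would float to the top for this person.
```

The cross-language mapping would then just be another weighting layer — "es rico" contributing to different adjective signals depending on locale — and, as noted, with enough data the clustering could sidestep the labels entirely.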
I feel this system could be particularly good across the Atmosphere (given this kind of methodology could be a lexicon used across review platforms too!) — but this idea hasn't seen the light of day yet, so please critique and adapt it; I'd love to hear your thoughts!
Generally: has the ATStore given any thought to other rating/review mechanisms? Do you have any other goals for what you have or will implement?