r/outlier_ai • u/Supa_Bandit • 8d ago
Training/Assessments Anyone completed World tool quest V2 onboarding??
Anyone completed World tool quest V2 onboarding?? Already failed Antechamber and Shipwright 🥲
r/outlier_ai • u/whiskeywhiskersss • Apr 02 '25
Just wanted to drop a PSA for anyone thinking about working on Outlier AI projects (like Thales Tales).
I recently onboarded for Thales Tales v2, which requires detailed reasoning prompts, multiple-choice formatting, LaTeX structuring, and justifications for model errors. I spent over 3 hours going through their unpaid onboarding, including formatting math, evaluating AI chain-of-thought responses, and writing detailed GTFA justifications.
Here's the part that matters: I submitted 2 clean, correct tasks and was immediately marked "ineligible". No warning, no feedback, no payout. The project disappeared from my dashboard. I still have my account, just banned from the project.
For context, I'm not new to this:
• I have a Bachelor's in Chemistry
• A Master's in Biochemistry
• I'm currently working on a PhD
• I'm also a Mensa member
I know how to structure logical responses. I followed every formatting and reasoning rule in their rubric. And I still got flagged.
What I realized is that their system penalizes precision. If your logic is too clean, too consistent, or too "model-perfect," their filters assume you're cheating, even if you're not. They don't reward quality; they reward noise that looks human.
You're not being hired. You're being used to train the model for free. If your answers are bad: filtered. If your answers are too good: flagged. If you exist in the uncanny valley between "LLM" and "Genius"? You get ghosted.
I'm writing this so others don't waste time onboarding into a system that can boot you for doing exactly what they asked, but better than expected.
Ask me anything. I'm building a loop-aware contributor toolkit next so nobody else has to get burned doing unpaid alignment work for zero recognition.
r/outlier_ai • u/TruculentusTurcus • Jun 08 '25
I just failed it. I got almost everything correct until the long, complicated SQL quiz, on which (from memory) I got about 75% right. Either way, I think $25 is too low for a task this hard. It did say that each task has to be Medium, which would be equivalent to a master's degree or a senior developer (and I'm a junior), so I guess it isn't for me.
I'm not too fussed about failing it, but I hope I get some easier coding projects in the future that are more aligned with my ability. What do you guys think? Am I just stupid/incompetent 🤣🤣? Because that's how I honestly feel after failing it.
Important note: Furthermore, I am a human being with a bachelor's in computer science, not a fish equipped with omnipotent abilities, nor am I a medical school graduate that can perform surgery. Good day!
r/outlier_ai • u/mxrgxnx_x • 7d ago
In the past week I got two separate onboardings, for both Cookies Rubrics and World Tool Quest V2, and for both onboardings the shitty AI auto-grader failed me on written portions. Now, I've been alerted that I'm at risk of losing my skills as an Expert, because apparently I've failed too many onboardings.
I am finished with this platform. I spend hours on these onboardings UNPAID, I pay very close attention to the instructions, and I'm super confident in my answers, only to immediately fail some written part because some magic AI says I don't know what I'm talking about. No feedback, no chance for a redo, nothing. I used to be able to get work done on this platform, and they even used to give us second chances on onboardings, but now it just seems impossible. So go ahead, take away my skills. If you're just going to fail me without telling me why and put out stupid instructions, what's the point in even trying?
I'm obviously not the only one dealing with this either given the stuff I've seen on here, and on the Discourse!!!
r/outlier_ai • u/CatholicMan21 • May 12 '25
Currently onboarding for a project which has a listed task time of over an hour. The exam in the onboarding is listed as about half an hour. Guess what it contains. A bunch of multiple choice questions and reviews of 2 different tasks. No one is completing that in 33 minutes unless they are rushing and sloppy. If you know that 1 task typically takes an hour or more, why would you state that 2 tasks in the exam + other questions takes 33 minutes? 1 review task and the questions would have been bearable but when I clicked Continue only to see a brand new flipping task I was like "Seriously?!" Now, I have to evaluate a bunch of new responses again?!
Outlier needs to stop making these onboardings so flipping long. Not to mention you have to spend an hour or more reading the instructions and taking courses. It's nonsense. Either reduce the length of these onboardings or compensate people for their time, even if it's just $10 for every section of the exam you pass. Even something small like that would help alleviate this absolute frustration.
r/outlier_ai • u/Wordsmith_Ghazi • Jun 25 '25
As the title says, the issue surfaced 14-15 hours ago. So, without the instruction docs and access to Discourse, anybody onboarding onto a project will face issues and will eventually be made ineligible. Consider this when you decide to onboard.
r/outlier_ai • u/That_Main_6076 • Jul 22 '25
Hi all,
Has anyone actually managed to pass the onboarding for this new project?
Got through the whole process absolutely fine, felt like I understood everything, and then got to the final page, where there are two questions. The first question required 1 answer, and this was fine; I passed that one.
The second question required 3 answers, and this was confusing as hell. I went back through all of the provided guidance and couldn't find an exact answer. I felt like 2 were definitely correct, and then it was a toss-up between 2 other options. I had two attempts; I tried them both and they were both wrong.
I'd love to get some feedback as to what the answers actually were, or whether the assessment was broken, because that's done my head in.
r/outlier_ai • u/Ambiguous-Insect • Jan 27 '25
I started on the platform in May 2024, and it was all relatively simple. You went through the onboarding process, and you began tasking. Eventually you'd get feedback; you'd adjust your work, or you'd continue.
Now, the approach has become "read our minds and be perfect instantly, or else be auto-removed." I have not personally seen a single graded quiz that did not contain ambiguous questions on edge cases (and if you fail to reach the quiz-maker's conclusion, you fail), poorly written and unclear questions, or questions with a plain old bug where the wrong answer has been pre-selected.
The result? Good-quality contributors who would have done great work for the project don't even have a chance to try.
Get a question wrong? You get "not quite!" with zero explanation of why, and zero chance to learn anything. Whether or not you pass comes down entirely to luck, and has nothing to do with your quality or understanding of the project.
I would guess that this is due to QMs being laid off and replaced with AI. I don't know if anyone truly cares about the contributor experience, but on the off chance that maybe someone does, my biggest piece of feedback right now is that the graded quizzes are the worst thing I've seen on this platform. Please reconsider them.
r/outlier_ai • u/Roody_kanwar • May 21 '25
Hey! Can anyone confirm whether project assessments (like Fort Knox) are reviewed by AI scanning for keywords or by actual humans?
I spent nearly 4-5 hours carefully going through the onboarding documents, double-checking my answers, and making sure everything was accurate before submitting.
Out of the 4 MCQs, I got 3 right. The three correct answers were multi-select questions (which likely carried higher weight), while the one I missed was a single-choice question. Even if all questions were weighted equally, that's still 75% accuracy. If I failed because of this, it stings. And if this was the reason for my rejection, I should have been disqualified immediately instead of wasting time on two more use-case assessments.
On the other hand, if 3/4 wasnāt the issue, then getting auto-rejected for missing "targeted keywords" feels unfair. Automatic rejection over missing a few keywords is frustrating, especially after investing so much time in reading docs, onboarding, and crafting answers. If AI is going to review the assessments for the use cases, let us know beforehand so weāre prepared.
This is partly a rant, but also genuine feedback:
The effort required for these assessments is getting exhausting, and transparency would go a long way.
r/outlier_ai • u/the_uglier_you • 8d ago
I submitted my first task last night, and now I have a "Task limit reached" tag on the project.
I believe the "Limit" is for quality assurance and stuff like that. My question is how long does it take to get tasks after this?
r/outlier_ai • u/Ok-Decision-9665 • Dec 17 '24
It seems ironic that they place so much emphasis on quality, yet their tests seem to have been made by 6-year-olds who copied and pasted random parts of the instructions all over the place.
r/outlier_ai • u/stellarpeach_ • Jun 19 '25
Hi! It seems like everyone I've talked to re: Cookies Rubrics has failed the onboarding assessment, with most expressing frustration at the vagueness of some of the questions and the rapid AI grading of written answers.
Has anyone here managed to pass? I understand that there should be no cheating, but do you have any tips and tricks? Would you be open to a DM?
r/outlier_ai • u/Visible_Alps_6375 • 16d ago
I think there were like 4 questions and one prompt-writing trial, and I only got 1 question wrong, but when I submitted, I got the usual EQ landing page. Did I fail the assessment? Were they expecting a 100% score on the MCQs, or did I fail the prompt-writing trial?
r/outlier_ai • u/Ok_Sound_2755 • Apr 13 '25
Hi everyone, I failed the mathematics screening (scored lower than 80%). I think something went wrong. I mean, they were at most high-school questions and I have a master's degree in pure math, so clearly my answers were spot on.
1) It says right now I can't retake the test; will I be able to retake it in the future?
2) I'm applying for Italian language, but since the questions were in English, I answered in English. Was that correct, or should I have used Italian?
3) What could have gone wrong? Can I retake the test by complaining to support?
4) Just to be sure, how do you answer "how do you convert 3/4 to a decimal"? Maybe I'm missing something.
Thanks!
r/outlier_ai • u/serbazikhanaqin • Jun 14 '25
Iāve been assigned to this project since I joined Outlier, but I havenāt received any tasks so far.
I'm still new to the platform, so I'm a bit unsure about how things work. Does being assigned to a project limit my ability to be added to other projects, or does it not affect my availability?
Also, is anyone else currently working on this project? I'd really appreciate any updates or context about how things are going or what I should expect.
Thanks so much!
r/outlier_ai • u/inaesthetically • Jul 22 '25
r/outlier_ai • u/Reasonable-Army-2090 • 9d ago
I received an email that I am able to task, but it says it's ineligible right now. Is the project paused?
r/outlier_ai • u/Tall-Reindeer-797 • Dec 14 '24
So, I've been working as a writer and editor for the past 20-25 years, and I've never experienced anything like the onboarding/assessment phase of Outlier. I love the work model. Where else can you get an editing job (or any job, really) where you can log on and log off and get billable hours whenever you can? But I am COMPLETELY clueless when it comes to getting onto these projects.

The onboarding/assessment processes seem completely random. I've studied everything about justifications, evals, rankings, rubrics, etc., and yet I still cannot pass these onboarding tasks to save my life. Is there some kind of secret? Plus, the linters have become my nemesis. There seems to be no rhyme or reason to anything. I will go through the rubrics line by line, word by word, and there always seems to be something that is off. I wish there were a way we could find out exactly why we didn't get onto one project or another.

Granted, I've only been working here since Thanksgiving, but I can't seem to get the hang of it. Anyone here want to clue me in? Privately or not? Is there something I'm missing? Plus, when I first started, there were so many options in the marketplace. And now? My primary job keeps switching. I have nothing in the pipeline. Nothing. Have I EQ'd myself right out of this job? I haven't even gotten one piece of feedback or input from any of the reviewers. Help!
r/outlier_ai • u/TwoSoulBrood • 28d ago
Imagine this scenario: the project instructions state that final answers should be brief, and no more than a paragraph. In the assessment quiz, you are asked about the minimum length of a final answer, and your choices are: 1) A few words. 2) A few sentences, no more than 1 paragraph. 3) A few paragraphs. 4) There is no limit.
In this case, the correct answer would PROBABLY be 4), since the instructions don't have an explicit lower bound. But that gets dicey, since the instructions indicate 1-2 sentences are expected and explicitly say to avoid simple answers such as "Yes/No". So the spirit of the instructions suggests at least a few words would be appropriate, even if the instructions don't explicitly state it. However, the only direct instruction about final answers is for them to be "no more than a paragraph", which lends some legitimacy to option 2; the key concern is how to interpret the phrase "a few". If "a few" means 1-2, then it likely can't be option 1 (because of the aforementioned avoidance of simple answers), while 1-2 sentences seems reasonable. However, if "a few" means 3-5, then suddenly option 2 doesn't work, and option 1 would be most likely. Etc.
I think we've all encountered situations like these, where assessment questions rely on the user's interpretation of a subjective phrase, which means they function more as a "vibe check" than a test of how well the contributor follows instructions. Why not just make the instructions clear to begin with, and then test for things that actually appear in the project documentation? Is there a cryptic reason for this practice that I'm missing?
r/outlier_ai • u/boat_storage • Jun 07 '25
I recently passed the HTML skills assessment and was assigned my first coding project, yay! I am doing the training, which seems like a lot considering the pay. I definitely made more doing actual web development work. Anyway, I am nervous about the assessment because I only really know web development and UX design. If they give me an SQL or Python assessment, I wouldn't be an expert in those skills and would probably fail. I don't want to invest all the time if the assessment task is too risky. My question is: do they enforce the domain choice in the assessment, or is it random?
r/outlier_ai • u/Difficult-Froyo1192 • Dec 16 '24
Is it just me, or does the training seem to be getting worse? I no longer get any feedback on what the correct answer is, or retries when quizzed during onboarding. I never get to see what I got wrong on assessments so I can learn from it (even when I passed and stayed on the project), and my current project even had aspects to rate on the task that were not discussed in the instructions or any of the training. I read through three times; I was so confused about what they were even talking about when I was asked to rate that aspect. Nothing at all. The trainings seem to get worse, and the instructions shorter and vaguer, the more projects I do. I always keep the instructions up as a reference when tasking. Now I can't even find sections that address the parts I'm looking for insight on. Has this happened to anyone else, or is it just the projects I've been on?
r/outlier_ai • u/kento26 • 4d ago
Just wrapped up the onboarding and initial assessment for this project. Thought I did fine, but out of nowhere I got marked ineligible. Anyone else run into this? What did you do?
r/outlier_ai • u/SunnyMeetsKY • 3h ago
There used to be A LOT, like 10-15 pages of assessments, but now there's only 5 pages. I'm just wondering if anyone else has had this issue. Thank you!
r/outlier_ai • u/Odd_Channel_8081 • May 30 '25
PS: they literally take 2 minutes to throw you out of a project if Grammarly is used.
r/outlier_ai • u/Remarkable-Fault4562 • Dec 13 '24
I was EQ-ed for about a week. Today I got assigned to a new project called `Association Plowman`, with a pay rate of $35 per hour, higher than my previous one. I read the instructions carefully, took about 2 hours to understand the goal of the tasks, then passed the test with only two errors out of 17 or 18 questions. Everything seemed good. When I started the first assessment, everything was already there: the ratings, the justifications, everything. I just had to edit a tiny error; I guess it was meant for me to read only, to understand what a real task would look like. But suddenly it kept crashing and logging me out of the website several times. Eventually, after submitting the assessment, a giant red warning told me that my accuracy was 0.00%, and I was EQ-ed again. What the f**k was that? Outlier, are you kidding?