Bears unite!

TASK

TaskUs, Inc.

Price Data Unavailable

About TaskUs, Inc.

View all WallStreetBets trending stocks

Premarket Buzz
0
Comments today 12am to 9:30am EST


Comment Volume (7 days)
15
Total Comments on WallStreetBets

12
Total Comments on 4chan's biz


Recent Comments

I can multi-task
Gotta train a new guy at work this week; guess he's getting a crash course in options trading. After every task we check on the port
I can multi-task and my future is in prop bets
The Penguin was good and now Chair Company. Barry and Mare of Easttown / Task were OK, but yeah, mostly trash since early GoT / True Detective s1. Definitely nothing at the level we all hope for, with HBO being the creator of most of the best TV shows ever.
Just the first one that came to mind. Task is decent, The Penguin is great, I haven't watched The Last of Us but it has been well received, I also liked Dune: Prophecy, and you couldn't escape Succession for a few years there. Just didn't think I needed to run through their whole catalog for a Reddit comment.
Task was a pretty good show that came out recently
Cannot crush without getting stabbed multiple times imo. I think she’s set on finishing the task
Regarding Google and Amazon custom AI chips: TPUs (v5e, v6) are optimized for training and inference and are tightly integrated with Google Cloud. Trainium3 (training) and Inferentia2 (inference) offer high performance at lower cost and energy. TPUs are native to Google Cloud's Vertex AI, enabling seamless scaling for LLMs, while AWS is the largest cloud provider and embeds Trainium3 in its UltraServer systems for enterprise AI. TPUs are often cheaper and more power-efficient than general-purpose GPUs, and Trainium3 uses 40% less energy and delivers 4x the performance of its predecessor. But Google designs its chips for its own AI workloads (Search, Bard, YouTube), and Amazon uses its chips to power Alexa, AWS services, and internal LLMs.

With that said, NVIDIA still controls 80–90% of the AI training chip market, though custom ASICs are growing faster. There is obviously some hyperscaler defection risk: Meta, Google, Amazon, and Microsoft are all designing in-house chips to reduce reliance on NVIDIA, and Meta is testing Google's TPUs for future workloads. But they can't replace their reliance on NVIDIA anytime soon. NVIDIA's GPUs can be overkill for some specific inference tasks, meaning they are more powerful than the task requires, so Amazon's Inferentia2 and Google's TPUs can offer cheaper, more efficient alternatives for very specific production-scale inference. As hyperscalers shift to in-house chips, NVIDIA may face pricing pressure and reduced volume in its highest-margin segment.

Yet NVIDIA still leads. CUDA ecosystem lock-in makes switching difficult: developers are deeply entrenched in NVIDIA's software stack, so switching is costly. NVIDIA has substantial performance leadership; Blackwell GPUs remain the gold standard for training frontier models, and NVIDIA is also working on the next-gen Rubin line, due in 2026, which will make a clear statement of continued dominance. NVIDIA has full-stack AI infrastructure: not just chips, but networking (NVLink, InfiniBand), systems (DGX), and software (TensorRT, NeMo).

So, the outlook? Some fragmentation, but not outright replacement. Hyperscalers would love to produce everything NVDA does in-house while maintaining quality, but they simply can't and therefore won't replace NVIDIA; they will carve out share in specific domains (like inference and internal workloads). NVIDIA's biggest risk is losing hyperscaler loyalty, not because of inferior tech, but because of cost, control, and vertical integration. These are problems that can be resolved. By 2028, NVIDIA is projected to lose some AI chip market share to custom ASICs from Google, Amazon, and others, but it will remain the dominant player in training workloads.

Regarding AI chip market share projections (through 2028):
NVIDIA: approx. 80% (training), 60% (overall). Still dominant in training but loses some inference share to ASICs.
Google (TPU): approx. 5–7%. TPU production could reach 7M units by 2028.
Amazon (Trainium/Inferentia): approx. 3–5%. Gains in inference, especially within AWS.
AMD: approx. 5–10%. MI300X adoption grows, especially in cloud and HPC.
Intel (Gaudi): <3%. Gains traction in cost-sensitive enterprise AI.
Others (startups, China): approx. 10%. Includes Hailo, Tenstorrent, Huawei Ascend, and domestic Chinese players.

In summary, NVIDIA still leads by far. CUDA ecosystem lock-in creates a moat: developers and enterprises are deeply embedded in NVIDIA's software stack. Blackwell and its successors remain the gold standard for training frontier models; that is training dominance. Full-stack integration from chips to networking (NVLink, InfiniBand) to software (TensorRT, NeMo) gives NVIDIA unmatched vertical depth. Fin
I mean I just use ChatGPT Plus, but it has agentic mode, for example, where it will work in a virtual machine for up to several hours to accomplish a task; it'll browse the internet, code, whatever. For simpler tasks it has thinking mode, where it will write out a bunch of thoughts to itself before writing its actual reply, and you can read them by clicking the "Thought for 1m 4s" bubble. I haven't personally used other models much, but as far as I can tell thinking is the standard now.
Yes, models will only get better, but I think previously they had a moat, an edge over the other companies. They were performing leagues ahead of the competitors. Their research output was phenomenal: Whisper (one of the best speech-to-text models released, and still commonly used even though it's about 4 years old already); CLIP, groundbreaking work that enabled general image/text retrieval and the multimodal models we see now; GPT-3.5 (I think the closest at the time was T5, and the difference was huge; their training recipe of pretraining, then instruction fine-tuning, followed by RLHF gave us amazing chatbots); DALL-E, o1, "reasoning", Sora, etc. They were almost best in class at the time for all of them.

But now they seem like they're slowly lagging behind. No good research output, and Scam Altman consistently over-promised with GPT-5. He made several statements about it being like AGI, but when they delivered, it was barely better than the competitors. Still remember Bard? It was trash lol. Google was the laughing stock just 2 years back. But now: Nano Banana Pro, Gemini 3, Veo 3, Genie 3. They're all really damn good.

I do think you're right, the models are at a stage where they're generally good enough to automate several low-stakes tasks; with a human in the loop they can be quite useful as an assistant. But given how much money has gone into OpenAI, the fallout will be historic if they continue down this path.

In the last 2–3 years scaling compute has delivered really good results, hence the narrative that whoever has the most compute will be the first to deliver AGI. Now I'm not sure we are there, or will ever get there, with just scaling. Look at Meta's Llama 4 Behemoth, a 1-trillion-parameter model that was never released; Llama 4 Maverick, a smaller version of it, gave us a glimpse of how shitty it was, while still being insanely huge and expensive to train.

"I'm here for one reason and one reason alone. I'm here to guess what the music might do a week, a month, a year from now. That's it. Nothing more. And standing here tonight, I'm afraid that I don't hear a thing. Just silence."

Scaling, I think, will still continue to deliver better models, but it does feel like quite a plateau. We are facing real bottlenecks: companies are spending so much to build data centers, but there's literally not enough water or power in the grid to support the growth, and hardware companies are unable to produce enough to support it either (a good problem, I guess?). I'm not sure how much longer investors will allow this kind of burn without expecting any returns.