Dump It.

T-C

AT&T, Inc. - 4.75% PRF PERPETUAL USD 25 - 1/1000th Int Ser C

Price Data Unavailable

About AT&T, Inc. - 4.75% PRF PERPETUAL USD 25 - 1/1000th Int Ser C

View all WallStreetBets trending stocks

Premarket Buzz
212
Comments today 12am to 9:30am EST


Comment Volume (7 days)
289
Total Comments on WallstreetBets

5575
Total Comments on 4chan's biz

Recent Comments

Hoping we get a small pullback tomorrow so I can buy more calls. Accidentally sold my calls today in my cash account and T+1 rekt me
> Reddit stock jumps 7% as Meta, Google potential deal chip boosts social media sector The way they try to explain every movement in the market. You really can’t make this sh*t up.
Dibs on Hi-C
Has anyone considered what happens if AI ISN'T a bubble? For the ROI to justify what they're pumping into it, it basically needs to replace every service industry. In which case that's a lot of unemployment....
PyTorch/TensorFlow work with both CUDA and TPUs, so this isn't really true... anyone worth their salt in the AI industry already knows PyTorch and TensorFlow, and maybe JAX. The main benefit of CUDA is that you can directly access low-level GPU kernels via C++/CUDA, whereas for TPUs you mostly work through high-level frameworks like PyTorch and JAX. However, since TPUs are built mainly for matrix multiplication, whereas NVIDIA GPUs are built for general-purpose compute, you don't actually need the low-level access on TPUs the way you do on NVIDIA GPUs.
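A toy sketch of what "low-level kernel access" means, in miniature (pure Python, illustrative only, nothing here is real GPU or CUDA code): a framework call hides the loop structure entirely, while a hand-written kernel controls things like loop tiling yourself, which is the kind of control CUDA exposes and TPU frameworks generally don't.

```python
# Illustrative only: framework-style vs. kernel-style matmul, in pure Python.

def matmul_framework(a, b):
    """High-level style: one call, no control over the loop order."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def matmul_tiled(a, b, tile=2):
    """Kernel style: explicit blocking/tiling, the sort of knob a
    hand-written CUDA kernel gives you and a framework call does not."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for p in range(p0, min(p0 + tile, k)):
                            c[i][j] += a[i][p] * b[p][j]
    return c
```

Both produce the same result; the point is only who controls the loop structure, not performance (tiling only pays off on real hardware with caches and shared memory).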
I have been buying NVDA since 2020. Sold most of it. Idgaf about it anymore. But trying to make them look like they are about to be bankrupt tomorrow just because Google has some cool TPUs is being short-sighted. You had AMD way before as a much more powerful competitor and none of you said sh*t.
Calling a Tensor Processing Unit “slow as sh*t compared to a Hopper or Blackwell GPU” oversimplifies and is often wrong for the kinds of workloads TPUs target. Modern TPUs are specialized matrix-math engines that can deliver very high FLOPS, strong performance-per-watt, and large cluster scale on well-matched deep learning workloads, often rivaling or beating contemporary GPUs in throughput and efficiency in Google-scale deployments. TPUs and GPUs target different markets: Google primarily uses TPUs to optimize its own services and cloud, while Nvidia sells hardware and a platform into a broad external market. Custom accelerators from hyperscalers (Google, Amazon, Microsoft) reduce their dependence on Nvidia and the margin they pay it, even if these chips are not widely sold externally, and that is exactly why Nvidia’s investors watch them closely.
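A back-of-envelope sketch of the throughput comparison being argued about (every peak-FLOPS number below is a made-up placeholder, not a real chip spec; only the arithmetic is the point): a matmul of shapes (M,K) x (K,N) costs about 2*M*K*N FLOPs, and ideal runtime is that count divided by sustained throughput, which is why "one big chip vs. many slower chips ganged together" can land either way.

```python
# Rough matmul cost model. The throughput figures are hypothetical.

def matmul_flops(m: int, k: int, n: int) -> int:
    """An (m x k) @ (k x n) matmul does ~2*m*k*n FLOPs (one mul + one add
    per inner-loop step)."""
    return 2 * m * k * n

def seconds_at(flops: int, sustained_flops_per_s: float) -> float:
    """Ideal runtime if the chip actually hits the given sustained rate."""
    return flops / sustained_flops_per_s

cost = matmul_flops(8192, 8192, 8192)        # ~1.1e12 FLOPs
# Hypothetical accelerators: one fast chip vs. many slower chips pooled.
one_big_gpu = seconds_at(cost, 1e15)         # assume 1 PFLOP/s sustained
pod_of_tpus = seconds_at(cost, 256 * 5e12)   # assume 256 chips x 5 TFLOP/s
```

With these invented numbers the pooled cluster wins on raw throughput while each individual chip is far slower, which is the whole "per-chip vs. per-pod" disagreement in one line of arithmetic.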
Listen up, regards. Ex-Googler here. Did my EE degree at MIT before spending years in Mountain View, so I’m going to try and explain this slowly so even you lot can understand why this TPU FOMO is absolute garbage. I am still bullish on GOOG in the long run and have quite a bit of my net worth tied up in the stock. HOWEVER:

You guys are drooling over "TPU" like you actually know what an ASIC is. A Tensor Processing Unit is a scalpel; an Nvidia GPU is a Swiss Army knife with a chainsaw attached.

Here is the engineering reality: TPUs are specialized ASICs. They are decent at matrix math for specific internal workloads, but individually? They are slow as sh*t compared to a Hopper or Blackwell GPU. To match Nvidia’s raw throughput, Google has to daisy-chain thousands of these things together. That is not "efficiency"; it is a massive hardware tax and a latency headache that doesn't show up on the spec sheet.

THE MOST IMPORTANT PART, WHICH YOU FORGET, is the software stack. Nvidia doesn't just sell chips; it sells CUDA. The entire planet's AI infrastructure, from robotics to AVs, is built on Nvidia’s stack. You don't just "switch" to TPUs. Porting a massive, production-level AV stack to run purely on Google’s custom silicon is an engineering nightmare. Nvidia has a moat wider than your wife’s boyfriend’s ego because of this software lock-in.

Google buying Nvidia chips while making TPUs isn't a sign Nvidia is dying. It is a hedge. It is called circular supply chain management. No massive hyperscaler wants a single point of failure. Even if Nvidia loses 10% or 20% market share to these internal chips, the Total Addressable Market for compute is expanding so fast it doesn't matter. Nvidia loses a slice, but the pie is getting 10x bigger.

Stop FOMOing into GOOGL thinking they just killed Jensen. They didn't. They'll both do well, but NVIDIA is still king. End of rant
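The "smaller slice of a bigger pie" claim is just arithmetic, and a minimal check makes it concrete (all figures hypothetical: a made-up TAM growing 10x while share drops 20 points):

```python
# Hypothetical numbers illustrating "lose share, still grow revenue".
tam_now, tam_later = 100.0, 1000.0    # assumed TAM, grows 10x
share_now, share_later = 0.90, 0.70   # assumed loss of 20 points of share

rev_now = tam_now * share_now         # 90.0
rev_later = tam_later * share_later   # 700.0, ~7.8x despite less share
```

The crossover point is simple too: share can fall by the same factor the TAM grows before revenue shrinks, so a 10x TAM tolerates anything above 9% share here.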
NVDA $35. GOOGL $1,000. Confirmed. The PR team at Nvidia should be fired for this dumb sh*t.