
WATT

Energous Corp

Price Data Unavailable

About Energous Corp

View all WallStreetBets trending stocks

Premarket Buzz
0
Comments today 12am to 9:30am EST


Comment Volume (7 days)
5
Total Comments on WallstreetBets

3
Total Comments on 4chan's biz


Recent Comments

Calling a Tensor Processing Unit “slow as sh*t compared to a Hopper or Blackwell GPU” oversimplifies and is often wrong for the workloads TPUs target. Modern TPUs are specialized matrix-math engines that deliver very high FLOPS, strong performance-per-watt, and large cluster scale on well-matched deep learning workloads, often rivaling or beating contemporary GPUs in throughput and efficiency in Google-scale deployments. TPUs and GPUs also target different markets: Google primarily uses TPUs to optimize its own services and cloud, while Nvidia sells hardware and a platform into a broad external market. Custom accelerators from hyperscalers (Google, Amazon, Microsoft) reduce their dependence on Nvidia and the margin they pay it, even if these chips are not widely sold externally, and that is exactly why Nvidia’s investors watch them closely.
TPUs shine for big, steady transformer jobs you control end to end, but GPUs win on flexibility and time to ship. Most stacks are PyTorch/CUDA; JAX/XLA on TPU is fast but porting hurts, and custom kernels, MoE, and vision workloads still favor H100/L40S or MI300. v5e/v5p offer great perf/watt on int8/bfloat16 dense matmuls, less so on mixed workloads. On-prem TPUs are rare; independents buy GPUs because of drivers, support, and resale value, while trading shops with tight regulatory requirements sometimes get TPU pods via Google. Practical play: rent TPUs on GCP for batch training, keep inference on GPUs with TensorRT-LLM or vLLM. We use vLLM and Grafana, and DreamFactory just fronts Postgres as a REST API so models pull features without DB creds. Net: TPUs for fixed scale, GPUs for versatility.
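The placement rule in the comment above (steady, dense-matmul batch jobs you control end to end go to TPU pods; anything needing custom kernels or the CUDA ecosystem stays on GPUs) can be sketched as a simple heuristic. This is purely illustrative: the `Workload` fields and the 0.8 threshold are assumptions for the sketch, not parameters of any real scheduler.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    # Hypothetical knobs for this sketch, not a real scheduler's schema.
    framework: str                # "jax", "pytorch", ...
    dense_matmul_fraction: float  # share of runtime in bf16/int8 matmuls
    steady_batch: bool            # long-running, fixed-shape batch job?
    needs_custom_kernels: bool    # hand-written CUDA/Triton kernels?

def place(w: Workload) -> str:
    """Route a job to 'tpu' or 'gpu' per the thread's rule of thumb:
    TPUs for big steady dense-matmul jobs, GPUs for flexibility."""
    if w.needs_custom_kernels or w.framework != "jax":
        return "gpu"  # porting pain; CUDA ecosystem wins on flexibility
    if w.steady_batch and w.dense_matmul_fraction > 0.8:
        return "tpu"  # well matched: perf/watt favors a TPU pod
    return "gpu"
```

For example, a long-running JAX pretraining job that is almost all bf16 matmuls routes to `"tpu"`, while the same job on PyTorch (or anything with custom kernels) routes to `"gpu"`.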
I work for Google (janitor, floor 17) and from what I’m seeing, Google’s TPUs have more performance per watt than NVDA can ever achieve with their architecture
I didn't see that argument in your post though. You don't even mention 'IFS' once. I see arguments why AMD is bad and not in demand, while "INTC is shining and thriving", which certainly isn't the case in the data center. AMD EPYC is in far more demand than XEON. And I still don't see any next-gen Intel CPUs able to compete on performance-per-watt. So on the foundry side - are you saying Intel is going to make its foundry totally vendor agnostic, competing head-to-head with TSMC, and open up its manufacturing lines on a competitive contract basis to all vendors - Broadcom, AMD, Apple, and NVIDIA silicon - and break away from Intel-centric CPU/GPU/NPU designs? (If yes - then awesome; I agree it's a good idea and I'd buy shares in that too)
Look at the power draw on the new Intel Core Ultra 290K-to-250K-plus lineup: it's still a joke. They all start at 125 W and climb into the 250 W range. This is why these CPUs can't go in ultrabooks, mini PCs, or Chromebooks, and by extension the same design in high-core-count, high-cache next-gen XEONs will be terrible in datacenter rack-mounted multi-system blade slots; poor IPC/W versus EPYC! I know gamers don't care about power draw and generally want the highest performance on just a couple of cores. Sure - these chips will be a good fit for those gamers and they'll win some benchmarks. But that's not where the big money is. I'll buy INTC when their latest-gen CPUs finally beat AMD's and Apple's in IPC-per-watt, which currently sit at the Pareto frontier.