# 147 Claude Cowork vs. Perplexity Computer - Which One's Actually Better?
Ecom Podcast

Summary

The Corey Ganim Show shares actionable Amazon selling tactics and market insights.

Full Content

Speaker 1: What if your AI was more than a chatbot? What if it could actually do work for you, not just give you answers to your questions? Well, that's exactly what Perplexity Computer and Claude Cowork promise to deliver. So in this video, I'm going to put them head to head: giving them the same three tasks, making them run the same three workflows, and showing you guys the real results. One costs $200 a month. The other costs only $20 a month. And honestly, the winner of this comparison kind of surprised me. So by the end, you'll know exactly which one is worth your money and exactly which one I recommend you use. Let's dive in.

The first task we're going to give these two tools is competitive research: we're going to have them do the research and then turn it into a client-ready deliverable. This is the prompt we're going to use inside both Perplexity Computer and Claude Cowork. I'm going to copy it, pop over to Perplexity Computer, make sure we're in computer mode, start a new task, paste in the prompt, and press Enter. While that's working, we're going to jump into Claude Cowork and do exactly the same thing: paste the same prompt and click Let's Go. We're going to run it on Opus 4.6 to make sure it's using the most powerful model. I'm not sure which model Perplexity Computer is going to default to here, but I'll come back once the prompt has run on both tools, and we'll evaluate the output against the criteria we've set.

All right, now that both tools have run the prompt we gave them, let's check out the output. For what it's worth, Cowork finished about 60 seconds before Perplexity Computer, but I'm not as concerned with speed as I am with accuracy. I've taken the outputs of both and put them into a Google Doc.
I'm going to show you guys exactly what those Google Docs look like. Here's Perplexity's output — all the work it did over the course of that prompt. It put together a one-pager, and then we can one-click export it to Google Docs, which is exactly what I did. Now, we want an objective way to score these, as opposed to just "I feel like this one's better." So we're going to pick a couple of pricing or feature claims from each output (not necessarily five) and verify them against the actual websites to make sure the information is legitimate. And guys, if you want a copy of the Google Doc I'm working from so you can run these tests yourself, check out the link in the description if you're watching on YouTube, or the show notes — there's a free download.

Looking at Perplexity's output at a high level first: remember, we wanted a client-ready deliverable that we could literally download and send straight to a client. Subjectively speaking, this looks pretty good. It's in a nice table format, it's aesthetically pleasing, and it even has a recommendation at the bottom: best value, best free tier, most polish, and best for services. Pretty good so far from Perplexity Computer. Now, if we look at what Claude Cowork gave us: right off the bat, remember one of the things we asked for was a concise one-pager. It broke that rule right away — this is more than one page, about a page and a half.
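As a side note, the spot-check scoring used here — compare each price a tool claimed against the price actually listed on the vendor's site — is simple enough to sketch in a few lines of Python. This is just an illustration of the method; the function name and all the figures below are made-up placeholders, not real pricing data.

```python
# Sketch of the scoring method: compare prices an AI tool claimed
# against prices verified by hand on the vendor's website.
# All figures below are illustrative placeholders.

def score_claims(claimed, verified):
    """Return (accuracy, mismatches) for a dict of plan -> price claims."""
    mismatches = [
        (plan, price, verified.get(plan))
        for plan, price in claimed.items()
        if verified.get(plan) != price
    ]
    accuracy = 1 - len(mismatches) / len(claimed)
    return accuracy, mismatches

claimed = {"free": 0, "teams": 15, "orgs": 37}   # what the AI reported
verified = {"free": 0, "teams": 16, "orgs": 37}  # what the site shows

accuracy, errors = score_claims(claimed, verified)
print(f"accuracy: {accuracy:.0%}, errors: {errors}")
```

Nothing fancy, but it turns "I feel like this one's better" into a number you can compare across tools.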
Now, that's not necessarily a deal breaker, but it's something to keep in mind: sometimes we need to go back and tell it to be more concise and give us more of a one-page result. Let's compare the output from each tool against Cal.com's actual pricing to make sure it's accurate. Cowork tells us the starting price is free, the mid-tier is $12-15 a month, and the top tier is $37 a month. Let's see. Free is correct. For the Teams plan, it's $12 per user per month billed yearly and $16 billed monthly, so Cowork did make a mistake there — it's actually $12 to $16. And for Organizations, it's $37 a month billed monthly, which comes down to $28 a month on the yearly plan. So a tiny error from Cowork, but not a deal breaker.

Now let's look at Perplexity's output. It included the free tier, which is accurate. For the Teams plan, Perplexity said it costs $12 per user per month, which is correct if you choose the annual option. And Perplexity also said $28 per user per month on the enterprise custom plan, which is correct. So for the first test, Perplexity is definitely the winner. It produced a better-looking document, and it followed our instructions to a T: one page, aesthetically pleasing, client-facing. Let's jump into the second test.

For the second test, I've created a folder with 10 text files containing fake client intake notes. The formats are varied, there are a bunch of typos, and the structure is inconsistent. The question is whether each tool can read these 10 messy files, extract structured data from each one, and output that data into a clean spreadsheet.
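For anyone curious what this extraction step looks like under the hood, here's a rough Python sketch of the same job: pull a few structured fields out of free-form intake notes and write them to a CSV. The field names and regex patterns are hypothetical, and genuinely messy notes would defeat simple regexes — which is exactly why the test hands this work to an AI tool instead.

```python
# Sketch: extract structured fields from messy, free-form client intake
# notes and write them to a CSV. Field names and patterns are
# hypothetical examples, not the actual test files from the video.
import csv
import re

def extract_record(text):
    """Best-effort extraction of a few fields from free-form notes."""
    def find(pattern):
        m = re.search(pattern, text, re.IGNORECASE)
        return m.group(1).strip() if m else ""
    return {
        "name": find(r"(?:client|name)\s*[:\-]\s*(.+)"),
        "business_type": find(r"(?:runs a|owns a)\s+(\w+)"),
        "followers": find(r"(\d+)\s*(?:instagram\s*)?followers"),
    }

# Two fabricated notes with inconsistent formatting, like the test set
notes = [
    "Client: Jane Doe\nowns a bakery, wants a site redesign\nabout 200 followers on instagram",
    "name - Alex Kim\nruns a gym, no online booking\n850 followers",
]

records = [extract_record(n) for n in notes]
with open("intake.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
```

Each hand-written pattern only covers formats you anticipated; the point of the exercise is that Cowork and Perplexity Computer have to handle formats nobody anticipated.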
I'm going to pause and come back when I've uploaded the files to each tool, and then we'll run the test and see what it gives us. This is the exact prompt we're using. All right, now that both tools are done running their analysis, let's dive in and score them against each other. Something to keep in mind: because Perplexity Computer is cloud-based and doesn't run off your local machine, I actually had to upload each of the files to the prompt. With Cowork being attached to your machine, it was able to go into that folder and pull the files directly from my computer. Not a big deal either way, but it's a key difference between the two tools: Perplexity is cloud-based, while Cowork can run off of and modify the files inside your computer.

Let's look at the output. The first one is Perplexity's — the spreadsheet it put together from that fake client data, which, remember, had a bunch of inconsistencies and typos. Then here's the one Cowork gave us. Subjectively speaking, these look pretty similar. Personally, I like the look of the Perplexity one a little better; it's more aesthetically pleasing and a little cleaner. The one from Cowork still gets the job done — no big issues there. Now, how are we going to score these? We're going to check the spreadsheet against one of the original files — not all 10, just one. I copied one of the original files into a Google Doc so we can look at them objectively. The one we're going to check is the fake client notes for Sophia Martinez. If we find her here, we see she's the first one.
Perplexity said her name is Sophia Martinez and her business is Sophia's Bakery, a bakery and food service business. Let's see if that's correct. She owns a bakery — correct. And what is the name of her bakery? Perplexity even specified that the business name is unnamed, which is true: in our example notes, there is no business name. Perplexity took the liberty of calling it Sophia's Bakery because we didn't give it an explicit name, but it could tell it's a bakery slash food-service business. So far, so good. I'll zoom in a little further so you guys can see.

It says that, based on the notes we gave it, the primary bottleneck is an outdated website with no online ordering, reliant on word of mouth and a small Instagram following of roughly 200 followers for customer acquisition. Is that true? It looks like she requested a website redesign; she has a Wix site that she says looks outdated and cheap, and she wants something more professional with online ordering. Toward the bottom is where she talks about her bottleneck: she gets most of her customers through word of mouth and Instagram, where she has about 200 followers. So Perplexity perfectly identified the primary bottleneck for this particular fake client. Perplexity estimates her wasted time at five to eight hours per week, and it's pretty hard to say whether that's accurate — Perplexity pretty much had to make a gut call.
But if I'm a baker with no online ordering system, taking orders through word of mouth, referrals, and a few through Instagram, five to eight hours per week is probably an accurate estimate of the time I could get back with a professional website and ordering system. Lastly, we asked Perplexity to recommend an AI tool she could use to fix that bottleneck, and it recommended ChatGPT for generating social media captions, product descriptions, and email drafts to boost her online presence. That's a pretty basic recommendation. I don't love it — I don't really like using ChatGPT for anything these days — but at the end of the day, not terrible.

Let's see how Claude Cowork's output compares. Again, we're looking at Sophia Martinez. Cowork did the same thing as Perplexity: it called the business Sophia's Bakery because it didn't have a name, and it recognized it as a bakery slash food-service business, which is correct. Then it identified the same primary bottleneck as Perplexity: an outdated website with no online ordering, reliant on word of mouth and a small Instagram following for customer acquisition. The output in this column was virtually identical for both tools. I'd be curious to know whether Perplexity actually used Opus 4.6 for this exercise, because Perplexity will choose the model it thinks is best for the job whenever you give it a task — which is cool, because you're not tied to just the Anthropic models the way you are with Cowork. Looking a little closer, Cowork estimated her wasted hours at about five per week. Perplexity gave a range; Cowork gave a flat number, and I like that Cowork was a little more conservative on the lower end.
Now, this is where the two tools differ, and where I think Cowork takes the edge for this specific exercise. Again, the primary bottleneck both tools identified is that Sophia's Bakery has an outdated website with no online ordering. Perplexity, even though it identified the website as the bottleneck, recommended ChatGPT for generating social media content. Cowork actually looked at the bottleneck and said: a bad website is your bottleneck, so you should use an AI website builder to quickly generate a modern website with integrated online ordering. In my opinion, Cowork takes the cake for this exercise because its tool recommendation is a lot more relevant to her specific bottleneck. Social media content isn't her bottleneck, even though Perplexity's recommendation suggests it thought so. So overall for exercise two, Cowork wins specifically because it recommended a better tool for the job, one that will actually help her solve her problem. Subjectively, Perplexity's output still looks better — more aesthetically pleasing — but that's not what we're measuring here. We're measuring the actual effectiveness of the tools.

Lastly, we're going to test one more use case: whether each tool can research and build a client deliverable in a single workflow. Can they research real tools? Can they find real user reviews? Can they produce a polished PDF report, all in one shot? We'll give them both the same prompt, as always, and I'll come back when they're done. All right, now that both Cowork and Perplexity Computer are done with their research: what we asked them to do is research three free or low-cost AI tools that help solopreneurs automate invoice follow-up.
We wanted them to find the name, the pricing, one key feature, and one real user review, then compile the research into a one-page PDF recommendation report with a professional header, a comparison table, and a "best for" recommendation at the bottom. This is what we got out of Perplexity — I'll zoom in so you guys can see it better. We asked for a professional header, which we got; a comparison table, which is here; and a "best for" recommendation, which is here at the bottom. One of the things we asked for was a real user review. Now, these reviews look legitimate — nothing tells me they were hallucinated or false. But what I like about Perplexity Computer's output is that it actually gave us sources at the bottom: not only for what it put in the comparison table, but for the reviews as well. Subjectively speaking, this output from Perplexity looks really good. I'm a big fan of it, and it gave us three pretty common options.

Let's see what Cowork gave us. Perplexity recommended Zoho Invoice, Invoice Ninja, and Invoice Sherpa; Cowork recommended Zoho Invoice, FreshBooks, and Invoice Sherpa. So two of the three recommendations were the same across both tools. Cowork's output also followed our rules to a T: a concise one-pager with a header, a comparison table, and a "best for" table. The only thing Cowork left out — and keep in mind, we didn't ask for this; it was just a nice touch from Perplexity Computer — is that it did not cite its sources with specific links.
In the footer of the Perplexity output, we have the exact sources and links, including for the reviews. Cowork's footer says "report prepared March 2026; pricing and features may change; verify on each tool's website before purchasing," and it names the sources — G2, Capterra, GetApp, NerdWallet, SMBGuide, and Research.com — but doesn't link to them. Again, we didn't ask it to link to the sources, so you can't necessarily fault it for that, but I like that Perplexity Computer went above and beyond. Both outputs here are very similar and very strong, so I'd call this third exercise a tie.

People are probably wondering: okay, what's the verdict? Perplexity won the first exercise, Cowork won the second, and they tied the third. What's my recommendation? One thing you've got to consider is that Cowork is available on any Claude paid plan, which starts at $20 a month. Perplexity Computer is only available on the Perplexity Max plan, which is $200 per month. So is Perplexity worth using if you're going to pay 10 times more than you would for Cowork? My honest opinion right now is no. If you can pay $20 a month to access one of these tools, I would go with Cowork. I don't think Perplexity Computer is yet worth $200 a month. Does it feel a little better overall? Slightly — I'd say maybe 5% to 10% better than Cowork right now, from my initial testing. That's mainly because Perplexity Computer has access to all the models out there, not just Anthropic's, and because it's cloud-based, it can run even when your computer is off or you're away, whereas Cowork has to run on your local machine with your computer on.
So Perplexity Computer is a slightly better product, but is it worth paying 10 times more for? The answer is no. My pick: go with Claude Cowork. I hope you guys enjoyed this comparison. This was my honest opinion after testing Cowork for the last month, month and a half, and Perplexity Computer for the last couple of days. They're both great tools, and I'm excited to see what Anthropic does with Cowork and what Perplexity does with Computer. If you like these kinds of videos and episodes, be sure to subscribe for more. I'll be back next week.

This transcript page is part of the Billion Dollar Sellers Content Hub.
