Payoneer Opens Glilot Office, Hiring Dozens to Boost AI Capabilities

This is either a smart bet on the future of fintech — or a very expensive way to say “we’re doing AI now” without proving much yet.

Payoneer just announced it’s opening new offices in Glilot and plans to hire dozens of employees in Israel. The public story is pretty clear: they want to boost their AI capabilities and “transform into an AI-native organization.” Hiring is supposed to focus on product development, AI engineering, data, compliance, and operations. In other words, not just a couple of researchers in a corner. They’re building a hub that touches the parts of the company that actually run the business.

I don’t think this is meaningless. But I also don’t think it’s automatically good.

Because “AI-native” is one of those phrases that sounds decisive and modern, but it can mean anything from “we rebuilt our core systems” to “we added a chatbot and renamed a team.” Companies love the vibe of it. Investors like it. Job candidates like it. Customers are supposed to feel reassured that the company won’t fall behind. The problem is that the hard part of AI isn’t hiring. It’s what you do with those people once the press release glow fades.

If you’re Payoneer, the temptation is obvious: you operate in a world full of friction. Payments across borders, risk controls, compliance checks, customer support that spans time zones and languages — all of it is expensive, messy, and easy to mess up. AI can help. It can make teams faster. It can reduce manual work. It can catch patterns humans miss.

But the same “help” can also turn into the kind of quiet damage that doesn’t show up until customers are furious.

Imagine you’re a small business owner using Payoneer to get paid by overseas clients. One month, something in an automated system flags your account. You’re suddenly stuck. You need a human to look at it, because rent is due and your supplier doesn’t care about an algorithm. If Payoneer uses AI to speed up decisions but doesn’t invest equally in clear appeals and responsive support, the customer experience becomes a coin flip. Fast when it works, brutal when it doesn’t.

Now imagine you’re a compliance person inside the company. You’re told AI will help you spot risk earlier and reduce bad activity. Great. But you’re also on the hook when the system blocks the wrong people, or misses the right ones. If leadership treats compliance hiring as a checkbox while pushing AI deployment aggressively, that’s not innovation — it’s setting up a blame game.

This is why the office and hiring news matters more than it looks. When a company says it's building an "innovation hub," it's choosing what kind of organization it wants to be. Israel has deep talent in engineering, security, and data work, and it's not surprising to see a global fintech invest there. If Payoneer is serious, this could be a real advantage: not just new features, but better decisions, tighter risk controls, smarter product design, and faster cycles.

Still, there’s a quieter truth here: AI projects inside real companies fail all the time, and not because the engineers aren’t good. They fail because incentives are weird. Leaders want a story. Teams want to ship. Legal and compliance want caution. Customer support gets the fallout. And customers just want the basics to work every time.

Hiring dozens of people across product, AI, data, compliance, and operations suggests they at least understand that AI touches everything. That's a positive sign. It signals they're not treating AI as a side project. But it also raises the stakes. Once you've built teams and branded yourself "AI-native," you're going to feel pressure to prove it. And pressure is where rushed automation decisions come from.

There’s also a broader consequence that people don’t say out loud: “AI-native” often becomes code for “we want to run leaner.” Maybe that means better margins. Maybe it means fewer repetitive tasks. Or maybe it means fewer humans available when things go wrong. If Payoneer uses AI to reduce headcount needs elsewhere, customers may get faster self-serve tools and slower human help. Some readers will say that’s fine — most people don’t need a human most of the time. Others will say that in finance, the worst day is the day you need a human, and that’s exactly when the system tends to fail you.

And then there’s the talent angle. A new office and dozens of roles can be great for local hiring, especially in a market where skilled people want stable, global companies. But it can also intensify competition and pull more talent into “AI everything” even when the work is mostly process automation with a new label. That’s not a moral failure. It’s just the reality of how hype reshapes careers.

I’m not against this move. I actually think fintech needs to get smarter fast, and AI can be a real tool if it’s used with restraint and accountability. But I don’t buy the language on faith. “AI-native” should mean customers notice fewer false freezes, fewer pointless back-and-forths, clearer decisions, and faster resolution when something breaks — not just flashier features.

So here’s the real test: when Payoneer builds this AI hub and starts shipping changes, will they use AI mainly to protect and serve customers better, or mainly to push decisions faster and cheaper even when the edge cases hurt people?