In theory, at least, very well. The biggest single-server HANA hardware can run most mid-size workloads: 2 TB of in-memory storage is roughly equivalent to 5-20 TB of Oracle storage, thanks to columnar compression. HANA can also chain multiple systems together, so in practice scalability has thus far been determined mainly by the size of the customer's wallet. Whilst SAP talks up "Big Data" quite a lot, HANA currently only scales to the small end of Big Data. That term usually refers to the kind of huge datasets that Facebook or Google have to store: not terabytes, but petabytes.
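The sizing arithmetic above can be sketched as a quick back-of-the-envelope calculation. This is a minimal illustration, not an official SAP sizing tool; the 2.5x-10x compression range is simply implied by the "2 TB in-memory ~ 5-20 TB of Oracle storage" figure, and the function name is hypothetical.

```python
# Rough sizing sketch: how much HANA memory is needed to hold a given
# amount of traditional row-store (e.g. Oracle) data, assuming the
# 2.5x-10x compression range implied by "2 TB in-memory ~ 5-20 TB".

def hana_memory_needed_tb(source_tb: float, compression_ratio: float) -> float:
    """Memory (TB) needed to hold source_tb of row-store data in HANA."""
    return source_tb / compression_ratio

# A 10 TB Oracle database at a conservative 2.5x compression:
print(hana_memory_needed_tb(10, 2.5))  # 4.0 TB of RAM
# The same database at an optimistic 10x compression:
print(hana_memory_needed_tb(10, 10))   # 1.0 TB of RAM
```

Real compression ratios vary heavily with the data (wide repetitive columns compress far better than high-cardinality ones), which is why the quoted range is so broad.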
