One reason cloud performance monitoring is so critical is that cloud providers' service guarantees are so nebulous. "Performance" to cloud providers typically means only availability, and even availability is only loosely guaranteed. For Amazon Web Services, as an example, unavailability means no connectivity at all during a five-minute period; if your user has a lousy, erratic, miserably slow connection, then as far as Amazon is concerned, Amazon has delivered. And availability means availability when the traffic leaves Amazon's door; whether anything actually reaches your user is not Amazon's problem, regardless of which ISPs and connections sit in between. Oh, and the burden of proof is on you, the customer. For all intents and purposes, Amazon is not even checking whether you have service at all.
This is not to dump solely on Amazon. The same guarantees, or lack thereof, are typical of many cloud providers. In addition to the caveats above, scheduled and emergency downtime is excluded from availability guarantees; penalties for unavailability are minimal, and certainly not commensurate with potential business damages; and any other kind of performance is not included in the service level agreement.
An ideal cloud-client working relationship includes substantial SLAs, external monitoring of SLA parameters that is visible to both provider and client, and meaningful recourse if the service falls short. In the absence of this ideal, however, the onus is on the enterprise to put cloud monitoring and measurement in place and to hold their provider accountable, so they can either get the service level they need or switch to a better provider.
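As a starting point, the kind of monitoring described above can be as simple as a periodic external probe. The sketch below, in Python with only the standard library, illustrates the idea; the endpoint URL and the timeout and latency thresholds are hypothetical placeholders, not anything a provider publishes. The point is that it runs outside the provider's network, so it measures what your user actually experiences rather than what the provider counts.

```python
"""Minimal sketch of an external availability probe (assumptions:
the endpoint URL and thresholds below are hypothetical examples)."""
import time
from urllib.request import urlopen
from urllib.error import URLError

ENDPOINT = "https://status.example.com/health"  # hypothetical endpoint
TIMEOUT_S = 5         # no answer within this window counts as a failure
SLOW_THRESHOLD_S = 2  # technically "available," but worth logging as evidence

def probe(url: str = ENDPOINT) -> dict:
    """Fetch the endpoint once; record success/failure and latency."""
    start = time.monotonic()
    try:
        with urlopen(url, timeout=TIMEOUT_S) as resp:
            latency = time.monotonic() - start
            return {"ok": resp.status == 200,
                    "slow": latency > SLOW_THRESHOLD_S,
                    "latency_s": round(latency, 3)}
    except (URLError, OSError):
        # Timeouts, DNS failures, refused connections: all unavailability
        # from the user's point of view, whatever the SLA says.
        return {"ok": False, "slow": True,
                "latency_s": round(time.monotonic() - start, 3)}

def availability(samples: list) -> float:
    """Fraction of probes that succeeded -- your own availability record,
    independent of the provider's definition of 'unavailable'."""
    if not samples:
        return 0.0
    return sum(1 for s in samples if s["ok"]) / len(samples)
```

Run `probe()` on a schedule (say, once a minute from a location near your users), keep the samples, and `availability()` over any window gives you the evidence the burden of proof demands.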