Authored by: symbolset on Saturday, February 23 2013 @ 07:20 AM EST
As we watch this event unfold, Azure storage has been down for 12 hours, and with it all the sites that rely on Azure for anything requiring any level of security. But this comes on the upstroke of a marketing blitz designed to position Azure as "enterprise class," especially as regards cloud storage, so a lot of the regular reporting services are going to be carrying not this story, but the "Azure storage is ready for the enterprise" story, with canned press releases only barely varying from each other. Over the next few days the least capable mouthpieces for Microsoft will reveal themselves as utterly incompetent. The usual accounts will be granting the usual praise in the comments as well. That's a beautiful thing, because it identifies the "not reliable" sources of information.
The root cause of the failure is that they simply forgot to renew their encryption certificate for secure communications. That is not going to inspire confidence in their "enterprise class" abilities. They weren't hacked. Some hardware didn't fail in an amazing new way. There wasn't a blackout, earthquake, flood, fire, volcano or storm. It turns out there was only one guy responsible for making sure this minor task got done - and he probably forgot to do it before going on sabbatical or leave or something. His sticky note fell off and got lost behind the desk. He didn't have a backup. That lapse on his part, plus a systemic lack of oversight of system-critical and quite regularly scheduled upkeep, allowed their entire global cloud storage service to go down - including all of the services for themselves and their clients - worldwide. It does not inspire confidence.
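For the curious, this is exactly the sort of task that's trivial to automate. Here's a minimal sketch in Python of the kind of scheduled expiry check that would have caught this weeks in advance - the hostname and 30-day threshold are my own illustrative choices, not anything Azure actually runs:

    # Check how many days remain on a server's TLS certificate.
    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(hostname: str, port: int = 443) -> int:
        """Fetch the peer certificate and return days until it expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # 'notAfter' looks like 'Feb 22 12:00:00 2013 GMT'
        expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        expires = expires.replace(tzinfo=timezone.utc)
        return (expires - datetime.now(timezone.utc)).days

    if __name__ == "__main__":
        host = "blob.core.windows.net"  # illustrative endpoint
        remaining = days_until_expiry(host)
        if remaining < 30:
            print(f"WARNING: {host} certificate expires in {remaining} days")
        else:
            print(f"OK: {host} certificate good for {remaining} more days")

Run that from cron once a day and the sticky note becomes irrelevant.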
This comes immediately after a week-long outage of their cloud database service, and just over a year after a lengthy outage of their management services caused by their inability to handle a leap day, as if that were an unexpected event. It will be interesting to see what uptime figures they claim in the coming year. Any reasonable enterprise is going to call that "one nine" of uptime. That's not enterprise class. It's not business class. It's not even consumer class: would you count on your car if it wouldn't do what you needed it to do at least 99 times out of a hundred ("two nines")? Your cell phone, even?
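To put the "nines" shorthand into actual hours, here's a quick back-of-the-envelope in Python, using the outage lengths mentioned above:

    # Availability over a year for a given amount of downtime.
    HOURS_PER_YEAR = 365 * 24  # 8760

    def availability(downtime_hours: float) -> float:
        """Percent uptime over one year."""
        return 100.0 * (1.0 - downtime_hours / HOURS_PER_YEAR)

    print(availability(12))           # this outage alone: ~99.86%
    print(availability(12 + 7 * 24))  # plus the week-long outage: ~97.9%

    # For scale: "two nines" (99%) allows ~87.6 hours of downtime a year;
    # "one nine" (90%) allows ~876.
    print(0.01 * HOURS_PER_YEAR, 0.10 * HOURS_PER_YEAR)

Just these two incidents already put them under two nines for the year.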
Let's not even mention the Microsoft Danger data loss debacle, where after more than a month Microsoft admitted that data for millions of their own cloud customers was lost forever and that entire cloud platform was abandoned - because a key executive decided not to wait for a backup to complete before upgrading the single SAN the cloud data was backed by.
Notably, though, outages on Microsoft's own properties are remarkably few this time; Xbox Live seems to be the biggest one. They must not be using their own cloud much, for some reason. Maybe their in-house talent knows more about what's going on here than their sales force and their evangelists do.
Authored by: artp on Saturday, February 23 2013 @ 11:58 AM EST
It is the opinion of The Register that to have a core service fail in every data center across the world simultaneously is an extremely bad thing to happen to a cloud provider.

Couldn't have said it better myself.

---
Userfriendly on WGA server outage: When you're chained to an oar, you don't think you should go down when the galley sinks?
Authored by: albert on Saturday, February 23 2013 @ 01:03 PM EST
Is it management incompetence?
Is it technical incompetence?
Renewing an SSL certificate sounds like something a secretary, or a janitor, could do. Perhaps this wouldn't have happened if a secretary had been tasked with it.
I'm sure there are _some_ technically competent people at MS, but I do still
wonder about the H1Bs. Are MS getting what they pay for?
I think things like this happen because of workplace culture: folks 'putting in their time' but not really motivated. If you feel like a drone, you start to act like one.
It indicates to me that MS just really isn't ready for the enterprise cloud
business. To allow a single point of failure to exist in a mission-critical
system is inexcusable.
I used to work for a very large multinational. Some of our customers had
businesses that could lose > $20,000* per minute of downtime. Fortunately,
we had great products and a dedicated staff. That's the only way to do
business.
You need great hardware, great software, and dedicated people. I think MS has
one out of three.
* adjusted to 2013 US dollars.
Authored by: Anonymous on Saturday, February 23 2013 @ 09:01 PM EST
The administrator of the certificates makes an oops and everything shuts down. Kinda leaves you wondering what'll happen to all those computers that end up with UEFI when Microsoft makes a certificate oops.

RAS