
Toward public cloud

We are now living in the second decade of the 21st century, and having data and services online is no longer an advantage; not having them is a disadvantage. We used to deploy servers in data centres to guarantee our customers permanent access to our services, or at least to give the impression that we were always online.

Our internal IT infrastructure was changing step by step, and the recurring question was: should we base everything on Microsoft products to minimise compatibility problems, or should we pick the best product in each category and support their integration ourselves?
We were living with the philosophy of IT perimeters delimited by security devices, even though users spent their time collaborating with others, sharing documents by email that ended up stored somewhere. Somewhere connected.

Today, everything looks different because of what we understand about new trends and the near future:
  • the soon-to-be soaring price of energy,
  • the unavoidable trend towards collaboration, pushed by social networks and the behaviour of generation Y,
  • the decline of old distribution systems such as email, as people move their communication to their social tool boards,
  • distributed models of data storage driven by end users' needs.
Consequently,
  • while the cost of hardware keeps dropping, optimising the infrastructure's operational cost is a mandatory survival concern,
  • while de-perimeterisation has not yet been widely adopted, the value of data is at least now understood; identity and access control are coming next,
  • while data storage reliability is transparently managed by multi-node systems, traditional ACID databases have new competitors.
What is the trend regarding system management?
  • Computing resources are largely idle in small companies: typical utilisation is 10-20%,
  • Only huge data centres will be able to minimise costs, through economies of scale,
  • Optimisation through market mechanisms such as auctioning resources could be really efficient.
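The utilisation argument above can be sketched with some back-of-the-envelope arithmetic. The figures below are hypothetical illustrations of the 10-20% utilisation claim, not real prices:

```python
# Cost per *useful* compute-hour: the hourly cost of a machine divided by
# the fraction of its capacity that is actually used. A server that is
# mostly idle is expensive per unit of real work done.
# All numbers are hypothetical, for illustration only.

def cost_per_useful_hour(hourly_cost, utilisation):
    """Hourly cost divided by the fraction of capacity actually used."""
    return hourly_cost / utilisation

# A small company's private server at ~15% utilisation...
private = cost_per_useful_hour(hourly_cost=1.0, utilisation=0.15)
# ...versus a shared pool where aggregation keeps utilisation high.
pooled = cost_per_useful_hour(hourly_cost=1.0, utilisation=0.70)

print(f"private: {private:.2f} per useful hour")   # 6.67
print(f"pooled:  {pooled:.2f} per useful hour")    # 1.43
print(f"ratio:   {private / pooled:.1f}x")         # 4.7x
```

At the same raw hourly cost, the low-utilisation private server ends up several times more expensive per unit of useful work, which is the economic pressure pushing workloads towards large shared data centres.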
Resource optimisation:
  • A private cloud (belonging to one company) cannot be a scalable solution, and enterprise data will move to the public one. This does not mean the enterprise's data becomes public, but that the infrastructure is shared among many companies.
Localisation:
  • Cloud regionalisation could be one solution. OK, I may be biased by my current professional security activity, but knowing that at least part of my data is not far from home makes it less of a subject of fear.
  • Also, this is the only way for software vendors, hardware vendors and IT staff to stay alive and justify their value.
Now, another huge issue is how to manage data, so let's talk next about the NoSQL movement.

Sources:
  • Perspectives: Using a Market Economy
  • InformationWeek: Private Clouds Are A Fix, Not The Future
  • ElasticVapor: Amazon EC2's Greatest Threat is Cloud Regionalization
