Microsoft's Office 365. Google's Gmail. Facebook's Messenger. These and other modern software products use a cloud-computing approach that lets you work anywhere with a network connection -- you can use your phone, your office PC, your friend's laptop that you borrow for 10 minutes.
Now it looks like Adobe Systems' Lightroom, one of the most popular photo-editing programs around, will follow suit. Lightroom today is locked to a single personal computer, but a job posting indicates Adobe wants to loosen that link.
"Adobe is building the next generation of cloud-first, device-connected products for people who care about photography," Adobe said in the job posting. "To help with this mission, the Lightroom team is looking for a desktop software engineer to help extend and enhance our industry-leading photography platform."
A cloud-first Lightroom would be good news if you're a photographer. It would let you send your main work machine in for repairs, better protect your photo catalogs against equipment theft, loss or fire, and make it possible to retrieve a particular shot when traveling without that beefy external drive that houses your photo catalog.
But the job post also shows how hard it is to move complex software into the modern age. In 2012, Adobe moved its software for creative pros like illustrators, photographers and moviemakers to a subscription model called the Creative Cloud. Paying $50 a month gets you access to the whole shebang, and $10 a month gets you Photoshop and Lightroom. But only gradually has Adobe added the cloud part of the Creative Cloud.
Initial steps toward cloud
First came the ability to synchronize files, then the ability to share libraries of design elements, then the ability to browse stock photos you might want to buy from within programs like Photoshop.
For Lightroom specifically, you can synchronize photos between your PC and your mobile devices. That lets you edit photos on your iPhone and take photos you captured with your Android-powered phone over to your PC. A web interface to these synchronized photos also is steadily improving.
But fundamentally you're still tied to your PC and its master catalog of photos. Today's Lightroom sync is useful but the program remains attached to its one-PC roots.
Adobe didn't comment beyond confirming that it's hiring for its Lightroom team. However, Tom Hogarty, director of product management for Lightroom, has told me in earlier conversations that the company has wanted to embrace cloud computing with Lightroom.
Why not sooner?
There's a good reason Adobe hasn't done this already: Photo and video files are big and therefore slow to transfer over the network. Even with a superfast broadband network like Google Fiber, with 1-gigabit-per-second speeds, a 30MB photo file takes about a quarter of a second to download; on a more typical 50-megabit-per-second connection it takes nearly 5 seconds, and Lightroom users often deal with dozens of photos at a time. Videos, panoramas and high dynamic range (HDR) images make file sizes dramatically larger, too.
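As a back-of-the-envelope check (a sketch, not Adobe's math): transfer time is just file size divided by link speed, remembering that file sizes are quoted in megabytes while link speeds are quoted in megabits.

```python
def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Time to move a file of size_mb megabytes over a link of link_mbps megabits/s."""
    return size_mb * 8 / link_mbps  # 8 bits per byte

# A 30MB raw photo over gigabit fiber vs. a typical 50Mbps home connection:
print(transfer_seconds(30, 1000))  # 0.24 seconds on a 1Gbps link
print(transfer_seconds(30, 50))    # 4.8 seconds on a 50Mbps link
```

Multiply by a few dozen photos per editing session and the delay becomes very noticeable.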
One way to build a "cloud-first" Lightroom that wouldn't make you swear at your poky network would be to store the primary copy of the catalog on an internet service but also keep a copy on your own PC. You'd get fast access to files; then Lightroom could synchronize your photos in the background.
Transferring lots of photos takes time, to be sure, but once the originals are uploaded, Lightroom needs to change only a small amount of accompanying data for each photo to record things like editing changes, titles and captions.
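A minimal sketch of that idea. The record format and names here are hypothetical, not Lightroom's actual sync protocol; the point is only that an edit record is tiny compared with the original file.

```python
# Hypothetical sketch: once the original is uploaded, only small edit-metadata
# records need to sync. None of these names are Adobe's.
import json

def sync_payload(photo_id: str, edits: dict) -> bytes:
    """Serialize only the edit metadata for a photo, not the original pixels."""
    return json.dumps({"photo": photo_id, "edits": edits}).encode()

payload = sync_payload("IMG_0042", {"exposure": 0.3, "title": "Dunes at dusk"})
print(len(payload))  # tens of bytes, vs. ~30MB for the original file
```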
An online photo library could be expensive, though it's possible Adobe could make it an add-on option. Adobe's Creative Cloud subscription offers 20GB of synchronized data, but my archive of 80,000 photos and videos is 1.6 terabytes -- bigger than most archives, probably, but nothing unusual for a photo enthusiast or pro.
Apple charges $20 per month for 2TB of iCloud storage, and Google Drive costs $10 per month for 1TB or $100 per month for 10TB. Even if Adobe partnered with a lower-cost online storage service like Backblaze B2, 1.6TB of storage would still cost about $8 per month.
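That $8 figure is straightforward arithmetic, assuming B2's roughly $0.005-per-gigabyte monthly rate (a published price at the time of writing; treat it as an assumption here):

```python
def monthly_cost_usd(archive_tb: float, price_per_gb: float) -> float:
    """Monthly storage bill for an archive, given a per-GB rate."""
    return archive_tb * 1000 * price_per_gb

# 1.6TB archive at an assumed $0.005/GB/month:
print(round(monthly_cost_usd(1.6, 0.005), 2))  # 8.0 -> about $8 per month
```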
But cloud computing brings real benefits, too, as anyone who ever left a laptop in a taxi or airport can attest. However it's done, a modernized Lightroom would be an important step.
Simply put, cloud computing is the delivery of computing services—servers, storage, databases, networking, software, analytics, and more—over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you’re billed for water or electricity at home.
Still foggy on how cloud computing works and what it’s for? This beginner’s guide is designed to demystify basic cloud computing jargon and concepts and quickly bring you up to speed.
Uses of cloud computing
You’re probably using cloud computing right now, even if you don’t realize it. If you use an online service to send email, edit documents, watch movies or TV, listen to music, play games, or store pictures and other files, it’s likely that cloud computing is making it all possible behind the scenes. The first cloud computing services are barely a decade old, but already a variety of organizations—from tiny startups to global corporations, government agencies to non-profits—are embracing the technology for all sorts of reasons. Here are a few of the things you can do with the cloud:
* Create new apps and services
* Store, back up, and recover data
* Host websites and blogs
* Stream audio and video
* Deliver software on demand
* Analyze data for patterns and make predictions
Top benefits of cloud computing
Cloud computing is a big shift from the traditional way businesses think about IT resources. Why is cloud computing so popular? Here are six common reasons organizations are turning to cloud computing services:
1. Cost
Cloud computing eliminates the capital expense of buying hardware and software and setting up and running on-site datacenters—the racks of servers, the round-the-clock electricity for power and cooling, the IT experts for managing the infrastructure. It adds up fast.
2. Speed
Most cloud computing services are provided self-service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning.
3. Global scale
The benefits of cloud computing services include the ability to scale elastically. In cloud speak, that means delivering the right amount of IT resources—for example, more or less computing power, storage, bandwidth—right when it's needed, and from the right geographic location.
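As a toy illustration of elastic scaling (not any provider's actual policy), an autoscaler might pick the smallest number of server replicas that covers the current load, within fixed bounds:

```python
import math

def replicas_needed(requests_per_sec: float, capacity_per_replica: float,
                    min_replicas: int = 1, max_replicas: int = 20) -> int:
    """Smallest replica count that covers current load, clamped to bounds."""
    wanted = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, wanted))

print(replicas_needed(450, 100))  # 5 replicas to serve 450 req/s
print(replicas_needed(40, 100))   # scales back down to the minimum of 1
```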
4. Productivity
On-site datacenters typically require a lot of “racking and stacking”—hardware setup, software patching, and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals.
5. Performance
The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.
6. Reliability
Cloud computing makes data backup, disaster recovery, and business continuity easier and less expensive, because data can be mirrored at multiple redundant sites on the cloud provider’s network.
Types of cloud services: IaaS, PaaS, SaaS
Most cloud computing services fall into three broad categories: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). These are sometimes called the cloud computing stack, because they build on top of one another. Knowing what they are and how they’re different makes it easier to accomplish your business goals.
Infrastructure as a service (IaaS)
The most basic category of cloud computing services. With IaaS, you rent IT infrastructure—servers and virtual machines (VMs), storage, networks, operating systems—from a cloud provider on a pay-as-you-go basis. To learn more, see What is IaaS?
Platform as a service (PaaS)
Platform-as-a-service (PaaS) refers to cloud computing services that supply an on-demand environment for developing, testing, delivering, and managing software applications. PaaS is designed to make it easier for developers to quickly create web or mobile apps, without worrying about setting up or managing the underlying infrastructure of servers, storage, network, and databases needed for development. To learn more, see What is PaaS?
Software as a service (SaaS)
Software-as-a-service (SaaS) is a method for delivering software applications over the Internet, on demand and typically on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure, and handle any maintenance, like software upgrades and security patching. Users connect to the application over the Internet, usually with a web browser on their phone, tablet, or PC. To learn more, see What is SaaS?
Types of cloud deployments: public, private, hybrid
Not all clouds are the same. There are three different ways to deploy cloud computing resources: public cloud, private cloud, and hybrid cloud.
Public clouds are owned and operated by a third-party cloud service provider, which delivers its computing resources, like servers and storage, over the Internet. Microsoft Azure is an example of a public cloud. With a public cloud, all hardware, software, and other supporting infrastructure is owned and managed by the cloud provider. You access these services and manage your account using a web browser.
A private cloud refers to cloud computing resources used exclusively by a single business or organization. A private cloud can be physically located on the company’s on-site datacenter. Some companies also pay third-party service providers to host their private cloud. A private cloud is one in which the services and infrastructure are maintained on a private network.
Hybrid clouds combine public and private clouds, bound together by technology that allows data and applications to be shared between them. By allowing data and applications to move between private and public clouds, hybrid cloud gives businesses greater flexibility and more deployment options.
How cloud computing works
Cloud computing services all work a little differently, depending on the provider. But many provide a friendly, browser-based dashboard that makes it easier for IT professionals and developers to order resources and manage their accounts. Some cloud computing services are also designed to work with REST APIs and a command-line interface (CLI), giving developers multiple options.
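To give a flavor of the REST side, here is a hedged sketch of what a provisioning request might look like. The endpoint, fields and token are entirely hypothetical, not any real provider's API; the example only builds the request without sending it.

```python
# Hypothetical sketch of a cloud provisioning call over REST.
# The base URL, path, and JSON fields are illustrative assumptions.
import json
import urllib.request

def build_create_vm_request(api_base: str, token: str, name: str, size: str):
    """Construct (but do not send) a POST request to create a virtual machine."""
    body = json.dumps({"name": name, "size": size}).encode()
    return urllib.request.Request(
        f"{api_base}/v1/virtualMachines",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_create_vm_request("https://api.example-cloud.test", "TOKEN", "web-01", "small")
print(req.full_url)  # the caller would send it with urllib.request.urlopen(req)
```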
Microsoft and cloud computing
Microsoft is a leading global provider of cloud computing services for businesses of all sizes. To learn more about our cloud platform, Microsoft Azure, and how it compares to other cloud providers, see What is Azure? and Azure vs. AWS.
What is Virtualization?
When people talk about virtualization, they’re typically referring to server virtualization, which means partitioning one physical server into several virtual servers, or machines. Each virtual machine can interact independently with other devices, applications, data and users as if it were a separate physical resource.
Different virtual machines can run different operating systems and multiple applications while sharing the resources of a single physical computer. And because each virtual machine is isolated from the others, if one crashes, it doesn’t affect the rest.
Hypervisor software is the secret sauce that makes virtualization possible. This software, also known as a virtualization manager, sits between the hardware and the operating system, and decouples the operating system and applications from the hardware. The hypervisor controls how much access the operating systems and applications have to the processor and other hardware resources, such as memory and disk input/output.
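As a toy model of that allocation role (purely illustrative; real hypervisors like KVM, ESXi and Hyper-V work at the hardware/OS boundary), a hypervisor hands out slices of one host's CPUs and memory and refuses requests that don't fit:

```python
# Toy model of a hypervisor partitioning a host's resources among VMs.
class Hypervisor:
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, ram_gb: int) -> bool:
        """Allocate a slice of the host to a new VM, or refuse if it won't fit."""
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            return False  # the host can't satisfy this allocation
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.vms[name] = (cpus, ram_gb)
        return True

host = Hypervisor(cpus=16, ram_gb=64)
print(host.create_vm("web", 4, 8))    # True
print(host.create_vm("db", 8, 32))    # True
print(host.create_vm("batch", 8, 8))  # False: only 4 CPUs remain
```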
Beyond using virtualization technology to partition one machine into several virtual machines, you can also use virtualization solutions to combine multiple physical resources into a single virtual resource. A good example is storage virtualization, where multiple network storage resources are pooled into what appears as a single storage device for easier and more efficient management. Other types of virtualization you may hear about include:
Network virtualization splits the available bandwidth in a network into independent channels that can be assigned to particular servers or devices.
Application virtualization separates applications from the hardware and the operating system, putting them in a container that can be relocated without disrupting other systems.
Desktop virtualization enables a centralized server to deliver and manage individualized desktops remotely. This gives users a full client experience, while letting IT staff provision, manage, upgrade and patch those desktops virtually, instead of physically.
Virtualization was first introduced in the 1960s by IBM as a way to boost the utilization of large, expensive mainframe systems by partitioning them into logical, separate virtual machines that could run multiple applications and processes at the same time. In the 1980s and 1990s, this centrally shared mainframe model gave way to a distributed, client-server computing model, in which many low-cost x86 servers and desktops independently run specific applications.
While virtualization faded from the spotlight for a while, it is now one of the hottest trends in the industry again, as organizations aim to increase the utilization, flexibility and cost-effectiveness of a distributed computing environment. Citrix, VMware, Microsoft, IBM, Red Hat and many other vendors offer virtualization solutions.
What virtualization isn’t:
Virtualization isn’t cloud computing
Virtualization abstracts computing functions into a logical environment that appears ideal for that function but may be quite different from the actual physical environment. This trades computing and other resources for flexibility, scalability, performance, reliability or some other requirement. In my view, this breaks down into seven layers of technology, each of which contains several distinct segments.
Cloud computing, on the other hand, is a delivery and consumption model that lets organizations buy access to applications, development and deployment platforms, and either virtual or physical servers, as a service and on a pay-per-use basis.
While virtualization technology is likely to be in use where the computing services originate, it is not strictly required.
Saying cloud computing equals virtualization is a bit like saying all cars equal fuel-injection systems, because some vehicles use fuel injection as a way to deliver fuel to the engine.
A demo project at this week’s Open Networking Summit aims to pave the way for implementing next-generation networking technologies in telecom carriers’ central offices, using open standards software and commodity hardware to replace proprietary, fragmented systems.
Called the Central Office Re-architected as Datacenter (CORD) proof-of-concept (POC), the effort is being led by ON.Lab and its Open Networking Operating System (ONOS) project in cooperation with AT&T as the service provider, with PMC-Sierra and Sckipio providing merchant silicon hardware components.
The concept being proved involves telecom and cable carriers leveraging new-age software-defined networking (SDN) and network functions virtualization (NFV) technologies to turn their carrier functionality into hosted workloads running on white-box, or commodity, hardware. The participants said this approach effectively provides Infrastructure-as-a-Service (IaaS) and networking services as applications that run on the commodity hardware, capitalizing on the agility and scale provided by cloud computing.
“CORD enables service providers to build an underlying common infrastructure with white boxes using ONOS (carrier-grade open source SDN control plane), OpenStack (virtual infrastructure management) and XOS (an open source service orchestration/management platform built on OpenStack) with a diversity of organizations building the services and solutions that ride above,” ON.Lab said in a news release. “In effect, this common infrastructure replaces the fragmented, non-commodity one in today’s central offices where each site hosts more than 300 unique deployed appliances, each requiring a physical install and specialized management.”
The CORD Setup (source: ON.Lab ONOS Project).
The CORD project aims to highlight the benefits of the approach to three types of end users — service providers, subscribers and third-party providers — listing the key capabilities for each.
For service providers, key capabilities include:
SDN control, orchestration and management with ONOS, OpenStack and XOS on commodity infrastructure.
An open high-performance leaf-spine fabric.
OpenFlow-enabled PON OLT MAC hardware enabling virtualization of the traditional Optical Line Termination (OLT).
OpenFlow-enabled G.fast distribution point unit (DPU).
Access-as-Service, Subscriber-as-a-Service, Internet-as-a-Service, Caching/Content Delivery-as-a-Service, virtualized functions including firewall, URL filtering, parental control and Broadband Network Gateway (BNG).
Service provider portal for intuitive provisioning, management and monitoring of infrastructure and services.
For subscribers, they are:
Simple customer premises equipment (CPE) that replaces existing complex CPEs and can be managed by ONOS.
Internet, firewall, parental control services.
Subscriber portal for signing up for and managing services.
For third-party providers, they are:
Content delivery (caching) for their own content in the service provider network.
Third-party provider portal for signing up for and managing services.
“SDN and NFV are speeding up innovation, as seen in projects like CORD,” said AT&T exec Tom Anschutz. “These technologies create systems that do not need new standards to function and enable new behaviors in software, which decreases development time. Faster development time leads to rapid innovation, something the industry needs to continue satisfying data-hungry customers.”
After the demo at the Open Networking Summit, which concludes today, the CORD project will continue development efforts and proceed to lab trials, with a CORD bundle of software and hardware expected to be made available in a ready-to-use “Pod” to service providers for testing by the end of this year.
ON.Lab also said the software used in the CORD demo will be available to the general public as part of the next ONOS release — called Drake — in late August. ONOS version 1.2, called Cardinal, was unveiled earlier this month as part of the ONOS three-month release cadence.
Companies Peddling Network Virtualization
Several IT companies are touting network virtualization as a must-have feature in any enterprise’s IT infrastructure, one that can greatly enhance security as well as operational efficiency.
These companies’ spokespeople repeatedly argue that the people responsible for corporate information systems and their associated security, who constantly worry about costly hacker attacks that could pounce at any moment, should pay heed to network virtualization. The technology is steadily gaining ground in popularity, though so far it is not as widely known or used as server virtualization, which is a smaller component of all-encompassing network virtualization.
The CEO of one pioneering company in this area, a vocal promoter of network virtualization, recently pointed out sharply that the security industry as it stands, with its patchwork of seemingly disconnected components, simply cannot offer clients consistent, efficient security solutions that cope well with perpetual threats, nor the flexibility to incorporate future upgrades. As a result, many firms are essentially wasting money on cumbersome, inept security systems that are convoluted and hard to operate, let alone good at the job they are supposed to do.
He suggests that network virtualization is the answer to the thorny problems facing the IT security industry right now, thanks to its ability to serve as a common platform on which the many diverse tasks and operations involved in providing IT security can converge, from application and data management to people coordination.
For the first time, then, we can treat security systems as fully and rationally integrated structures that can be designed, and later improved, with consistency and efficiency, and, most importantly, can reach the needed level of security on smaller budgets. Many people with knowledge of the matter say network virtualization will be a watershed in the ongoing evolution of the security industry, enabling it to build and maintain highly secure IT networks with true data-protection capabilities.
Many experts have long reminded enterprises concerned about network security that security features need to be built in from the very first stage of planning and design, and that it is inefficient and complicated to bolt required security elements onto complete, fully functioning networks that were designed and set up without them. Right now only network virtualization can integrate security elements into IT networks from the very beginning, creating harmony and smooth interoperability among the hardware, software and other components involved in the network.
Cloud Computing Platforms: Cloud Computing with VMware Cloud Technology, Private Cloud, Hybrid Cloud
All Cloud Computing Platforms Are Not Equal
Cloud computing offers undisputed benefits in terms of agility and cost-effectiveness. But cloud computing platforms are not commodities, with one easily substituted for another. Chances are that some of your workloads can only run on-premises and some can only run in specific proprietary clouds. Once you move a workload to a proprietary cloud, it typically requires rewriting and/or reconfiguration to move back onsite. Luckily you don’t need to re-architect for different public cloud infrastructures. VMware cloud computing services let you run both new and legacy applications in the cloud. You get the best of both worlds: leverage your existing investments while still gaining the agility and cost-effectiveness of a public cloud.
Cloud Technology for New and Legacy Applications
If your existing infrastructure is built on VMware vSphere, used by more than 80% of businesses worldwide, you can take advantage of a true hybrid cloud solution and extend your data center to the cloud quickly, easily and confidently. Because VMware vCloud Air is built on vSphere, your onsite and offsite IT environments can be connected and integrated, running existing and new applications in exactly the same way. You can get the same performance, security and compliance as you do from your current VMware infrastructure with the same agility, automation and standardization available in the cloud.
Public, Private and Hybrid Clouds
The software-defined data center (SDDC) gives you the basis for building a private, public or hybrid cloud for delivering IT-as-a-Service (ITaaS). The SDDC architecture gives you a common management, orchestration, networking and security model across on-premises and off-premises environments.
Build a vSphere-based private cloud on premises with vCloud Suite.
Extend your vSphere-based data center to a public cloud with VMware vCloud Air or through our extensive ecosystem of vCloud Air Network service providers worldwide.
Manage physical, virtual and hybrid cloud environments with vRealize Suite, a management solution built for heterogeneous, hybrid clouds.
Unified Management Platform
Policy-driven automation and management in a single unified management platform lets you centrally manage and monitor applications and workloads across heterogeneous infrastructure and hybrid clouds. vRealize Suite lets you increase business speed and agility while maintaining IT control and cost efficiency.
Build a Private Cloud
Leverage the software-defined data center architecture to build and run a vSphere-based private cloud. Deliver virtualized infrastructure services and highly available applications and services.
Beyond Server Virtualization
Server virtualization slashed CapEx and OpEx costs by more than 50%* while expanding business agility. Now you can virtualize the rest of the data center so that all IT services become as cheap and easy to provision and manage as virtual machines. The software-defined data center architecture extends abstraction, pooling and automation to the rest of your data center resources, including compute, network and storage. You can deploy your virtualized infrastructure on any cloud infrastructure and manage it across platforms.
Private Cloud Delivers Highly Available Apps and Services
Leveraging the software-defined data center architecture, a VMware-based private cloud provides the foundation for achieving highly available applications and services with a standardized and consolidated data center. A private cloud also enables secure and compliant IT, intelligent control of IT operations, rapid application provisioning and ongoing governance.
Improve Efficiency with a Private Cloud
Virtualizing servers is just the beginning. Policy-driven operations management in a private cloud reduces costs, both OpEx and CapEx.
Server virtualization with vSphere delivers CapEx and OpEx savings of 40-60%*.
Automated operations management responds to issues before service quality is impacted, increasing utilization and IT productivity.
Capacity planning and optimization identifies idle and over-provisioned VMs so you can optimize virtual machine density, balancing cost and risk through capacity modeling.
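A sketch of what such right-sizing logic might look like; the thresholds here are illustrative assumptions, not vRealize's actual rules:

```python
# Hypothetical capacity-planning sketch: flag idle and over-provisioned VMs
# from average CPU utilization. Threshold values are illustrative only.
def classify_vm(cpu_util: float, allocated_cpus: int) -> str:
    """Classify a VM by average CPU utilization (0.0-1.0) and allocation."""
    if cpu_util < 0.05:
        return "idle"              # candidate for reclaiming entirely
    if cpu_util < 0.25 and allocated_cpus > 2:
        return "over-provisioned"  # candidate for right-sizing
    return "ok"

print(classify_vm(0.02, 4))  # idle
print(classify_vm(0.15, 8))  # over-provisioned
print(classify_vm(0.60, 4))  # ok
```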
Improve Business Agility through IT Agility
The agility of an organization is often tied to the agility of its IT infrastructure. On-demand deployment with automated resource allocation lets you respond rapidly to changing business demands and keeps business users productive.
Full abstraction of compute infrastructure lets you provision applications on any hardware stack and vSphere-based private (or public) clouds.
Automated, policy-driven resource allocation matches resources to business objectives.
Self-service portal and catalog with policy-based lifecycle management reduces provisioning time for infrastructure, platform and desktop-as-a-service.
Control Cost, Availability and Placement of Apps
Deliver the highest levels of availability with the private cloud’s automated business continuity and virtualization-aware security and compliance. vCloud Suite reduces downtime of tier 1 applications by 36%.
Deliver business continuity through application-aware availability and automated fault tolerance.
Protect against threats and configuration drift through virtualization-aware security and compliance.
Move and scale workloads as needed by using a common management, orchestration, security and compliance model across vSphere-based private and public clouds.
Solve IT problems faster with rapid detection and root-cause analysis through operational analytics.
Unified Management Platform
Provision and manage workloads in your private cloud, even to other hypervisors and to non-virtualized environments, with the vRealize Suite.
Provision workloads to other hypervisors, including Microsoft Hyper-V and KVM.
Choose from on-site or as-a-service solutions.
Automate provisioning with policy-based lifecycle management capabilities.
vCloud Suite lets you build and run a vSphere-based private cloud. It includes:
vSphere: Compute virtualization platform
Site Recovery Manager: Automated disaster recovery
vCloud Networking and Security: Networking and security for virtualized environments
vRealize Automation: Self-service application catalog
vRealize Operations: Performance, capacity and configuration management
vCloud Director: Software-provisioning for software-defined data centers
Extend Your Data Center to the Cloud
VMware vCloud Air and VMware vCloud Air Network Service Provider partners are delivering on the promise of hybrid cloud computing, enabling you to seamlessly and securely extend your data center and applications to the cloud. With cloud services built on the trusted foundation of VMware technology, you can provision new workloads or move existing ones between onsite data centers or internal private clouds and the public cloud, and back again as needed, creating a true hybrid cloud. VMware provides unprecedented flexibility and choice of cloud services on a local basis with vCloud Air and through the vCloud Air Network--the world's largest network of validated cloud services based on VMware technology.
The Advantages of Hybrid Cloud
When you choose cloud services from VMware and our vCloud Air Network Service Provider partners, you don't have to worry about the application compatibility or service provider lock-in often associated with commodity cloud services. You reduce both risks and costs without needing additional management tools and infrastructure, reinventing your processes or retraining your existing workforce. The benefits of a VMware-based hybrid cloud platform include:
The ability to write, deploy and manage applications in the cloud the same way you do today, relying on the underlying platform to provide the same level of security, reliability and performance you get from your current VMware infrastructure.
Administration of the entire hybrid infrastructure—data centers and public cloud together—with a "single pane of glass" management framework. Use the same tools, processes and skills you already have.
Quickly deploy workloads to the cloud with the flexibility to move them between your on-premises and off-premises environments as your requirements change.
Global availability and choice of validated cloud services through VMware or our vCloud Air Network service provider ecosystem.
Data sovereignty through local vCloud Air data centers and vCloud Air Network service providers in 102 countries.