Monday 24 September 2018

Virtual Desktop Infrastructure (VDI) vs Remote Desktop Services (RDS)



Virtual Desktop Infrastructure (VDI):
Virtual Desktop Infrastructure (VDI) is a term used to describe users accessing a full desktop Operating System (OS) environment remotely. The desktop could be a normal PC or a Virtual Machine. VDI is a centralized desktop delivery solution. The concept of VDI is to store and run desktop workloads including a Windows client operating system, applications, and data in a server-based virtual machine (VM) in a data center to allow a user to interact with the desktop presented via Remote Desktop Protocol (RDP).
In a VDI deployment, there are two models, a static or persistent virtual desktop and a dynamic or non-persistent one. In static mode, there is a one-to-one mapping of VMs to users. In a dynamic architecture, on the other hand, there is only one master image of the desktop stored.
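The two deployment models can be sketched as a toy Python illustration (this is not vendor code; the class and VM names are made up for the example):

```python
# Toy sketch of the two VDI assignment models described above.

class PersistentPool:
    """Static/persistent VDI: a one-to-one mapping of VMs to users."""
    def __init__(self):
        self.assignments = {}  # user -> dedicated VM name

    def connect(self, user):
        # Each user always gets back the same dedicated VM.
        return self.assignments.setdefault(user, f"vm-{user}")


class NonPersistentPool:
    """Dynamic/non-persistent VDI: desktops provisioned from one master image."""
    def __init__(self, master_image):
        self.master_image = master_image
        self.next_id = 0

    def connect(self, user):
        # A fresh clone of the single master image is handed out per session.
        self.next_id += 1
        return f"clone-{self.next_id}-of-{self.master_image}"


persistent = PersistentPool()
# Same VM every time the user reconnects:
assert persistent.connect("alice") == persistent.connect("alice")

dynamic = NonPersistentPool("win10-master")
first = dynamic.connect("alice")
second = dynamic.connect("alice")
assert first != second  # a new clone per session
```

The point of the sketch is the contrast: the persistent pool keeps per-user state, while the dynamic pool only ever stores the one master image.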
Benefits of VDI:
  • Utilization of the same image.
  • Management of a single OS can reduce costs.
  • Processing moves from individual workstations to a VDI server.
  • Troubleshooting problems is easier.
  • Data is more secure.
Remote Desktop Services (RDS):
Formerly known as Terminal Services, RDS lets multiple users share the same OS and applications running on a server known as the “RD Session Host.” Shared sessions are how Terminal Services handles thin clients. The user’s machine functions like an input/output (I/O) terminal to the central server. Software installation, configuration, and updating are easier to control when end users’ desktops run in a centralized datacenter rather than on each end user’s PC. In addition, users can access their desktops from any computer running Remote Desktop Services (RDS).
Benefits of RDS:
  • Single point of maintenance.
  • Install once, use many.
  • Reduced license expense.
  • Solid security.
  • Lower costs.
How is VDI different from RDS?
VDI differs from RDS in the following ways:
  1. In an RDS environment, multiple users access a single environment, which can be customized on a per-user basis, but resources are not dedicated to a particular user. In a VDI environment, each user either accesses their own centrally hosted physical PC or VM, or accesses a shared VM.
  2. Also, in a VDI environment, physical CPU, memory, and disk capacity can be allocated to a particular user, which stops one user’s actions from affecting other users.
Different providers of VDI and their components:
VMware Horizon View:
VMware View provides remote-desktop capabilities to users using VMware’s virtualization technology. A client desktop operating system – typically Microsoft Windows 7 or Windows 10 – runs within a virtual environment on a server. The VMware View product has a number of components which are required to provide the virtual desktops, including:
  • View Connection Server: It is a software service that acts as a broker for client connections.
  • View Agent: It is a software service that is installed on all guest virtual machines in order to allow them to be managed by View.
  • View Client: It is a software application that communicates with View Connection Server to allow users to connect to their desktops.
  • View Client with Local Mode: It is a version of View Client that is extended to support the local desktop feature, which allows users to download virtual machines and use them on their local systems.
  • View Administrator: It is a Web application that allows View administrators to configure View Connection Server, deploy and manage desktops, control user authentication, initiate and examine system events, and carry out analytical activities.
  • vCenter Server: It is a server that acts as a central administrator and provides the central point for configuring, provisioning, and managing virtual machines in the datacenter.
  • View Composer: It is a software service that is installed on a vCenter server to allow View to rapidly deploy multiple linked-clone desktops from a single centralized base image.
  • View Transfer Server: It is a software service that manages and streamlines data transfers between the datacenter and View desktops.
Citrix XenDesktop:
Citrix XenDesktop, developed and sold by Citrix Systems, is an application and desktop virtualization product that delivers complete Windows desktops and applications across virtual infrastructures.
The components of Citrix XenDesktop are:
  • Delivery Controller: The Delivery Controller is the central management component of any XenApp or XenDesktop Site. The Controller manages the state of the desktops, starting and stopping them based on demand and administrative configuration.
  • Database: This database stores the data collected and managed by the services that make up the Controller.
  • Virtual Delivery Agent (VDA): It enables the machine to register with the Controller, which in turn allows the machine and the resources it is hosting to be made available to users.
  • StoreFront: StoreFront authenticates users to sites hosting resources and manages stores of desktops and applications that users access.
  • Receiver: It is installed on user devices and other endpoints. It provides on-demand access to Windows, Web, and Software as a Service (SaaS) applications.
  • Studio: Studio is the management console that enables you to configure and manage your deployment, eliminating the need for separate management consoles for managing delivery of applications and desktops.
  • Director: Director is a web-based tool that enables IT support and help desk teams to monitor an environment, troubleshoot issues before they become system-critical, and perform support tasks for end users.
  • License server: License server manages your product licenses.
  • Hypervisor: The hypervisor hosts the virtual machines in your Site. A hypervisor is installed on a host computer dedicated entirely to running the hypervisor and hosting virtual machines.
Closing Thoughts:
Both RDS and VDI are core components of desktop virtualization, and they satisfy specific computing requirements and scenarios with deployment readiness and flexibility. VDI and RDS have peculiarities that adapt to the different needs of a business, but making a choice between them could be difficult for some companies.

Sunday 23 September 2018

VMware vCloud Director – Storage Policies – Part 5


VMware vCloud Director Storage Policies

Part 5 of the VMware vCloud Director series shows you how to set up tiered vSphere Storage Policies to be used later on in vCloud Director. Storage Policies are set up within the vSphere Web Client.

In the background I have already mounted 4 NFS datastores which will represent 4 tiers of storage:

vcloud6-platinum1

vcloud6-gold1

vcloud6-silver1

vcloud6-bronze1

1. Log into the vSphere Web Client. Before setting up VM Storage Policies we will be setting up some storage tags. On the home screen click on Storage on the right hand side.





2. We will start with tagging the platinum storage. On the right hand side click on Manage – Tags – and click on New Tag





3. Type in a Name and Description to reference the platinum storage tag. Drop down the menu next to Category and select New Category.





4. Type in a Category Name and description, and make sure you have One tag per object selected along with Datastore under Associable Object Types. Click Ok.





5. We have now tagged our platinum datastore with the Platinum Storage Tag. Repeat the previous steps to create Tags for your gold, silver and bronze datastores.





6. Browse back to the home screen within the vSphere Web Client. On the right hand side click on VM Storage Policies.





7. Type a Name and Description for the Platinum Storage Policy.





8. A summary screen is displayed explaining what rule-sets are. Click Next.





9. Next to Categories at the top, drop down the menu and select Platinum Storage Category. Tick the box next to Platinum Storage Tag. Click Ok.





10. This Platinum Storage Policy will be satisfied by any storage that is tagged with the Platinum Storage Tag. Meaning if I assign the Platinum Storage Policy to a virtual machine, it will look at what datastores have the Platinum Tag assigned and place the virtual machine there. Click Next.
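The tag-matching logic behind a Storage Policy can be modelled in a few lines of Python (a toy model using the datastore and tag names from this lab, not the vSphere API):

```python
# Toy model of tag-based VM Storage Policy matching, using the four
# lab datastores mounted earlier. Names match this walkthrough only.

datastore_tags = {
    "vcloud6-platinum1": {"Platinum Storage Tag"},
    "vcloud6-gold1":     {"Gold Storage Tag"},
    "vcloud6-silver1":   {"Silver Storage Tag"},
    "vcloud6-bronze1":   {"Bronze Storage Tag"},
}

def split_compatible(policy_tag):
    """A policy is satisfied by any datastore carrying its tag; every
    other datastore is listed as incompatible storage."""
    compatible = [ds for ds, tags in datastore_tags.items() if policy_tag in tags]
    incompatible = [ds for ds in datastore_tags if ds not in compatible]
    return compatible, incompatible

ok, bad = split_compatible("Platinum Storage Tag")
print(ok)    # ['vcloud6-platinum1']
print(bad)   # the gold, silver and bronze datastores
```

This is exactly the compatible/incompatible split the wizard shows in the next step: placement is driven purely by which datastores carry the policy's tag.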





11. You can click on Compatible to see which storage has been tagged with the Platinum Storage Tag. Storage that contains other tags will be listed under Incompatible Storage.





12. A summary of your settings is displayed. Click Finish.





13. Repeat the last few steps to create a Storage Policy for the Gold, Silver and Bronze Storage Tags.



VMware vCloud Director – NSX Install and Configure – Part 4



VMware vCloud Director NSX Install and Configure


Part 4 of the VMware vCloud Director series looks at the installation and configuration of VMware NSX. VMware NSX will be providing Layer 2 and Layer 3 network functionality to vCloud Director.


Installing VMware NSX for vCloud Director


1. VMware NSX Manager is provided as a virtual appliance. First up we’ll look at the installation of this virtual appliance. In this demo we’ll be installing the virtual appliance via the vSphere client. Launch the vSphere client and click on the file menu – Deploy OVF Template. Once the wizard launches, select the VMware NSX manager OVA file and click Next.


 


 


2. Some information is displayed about the VMware NSX OVA image, click Next.


 


 


3. Click Accept on the End User License Agreement followed by clicking Next.


 


 


4. Type a name for the VMware NSX manager and select a folder to place the virtual machine. Click Next.


 


 


5. Select a Resource Pool or vApp to place the VMware NSX manager. Click Next.


 


 


6. Select a Datastore to place the VMware NSX Manager virtual machine into. Click Next.


 


 


7. Depending on your storage you may or may not have the option to change the disk type, if you do have the option select the disk type appropriate to your environment. Click Next.


 


 


8. Choose the Network that the VMware NSX Manager will connect to. Click Next.


 


 


9. Enter in passwords for the CLI admin user and for CLI privilege mode. Enter in the FQDN for the hostname (I also setup a forward and reverse DNS entry in my Windows Active Directory DNS Server). Enter an IP address that you wish to assign to the VMware NSX Manager.


 


 


10. Scroll down and enter in the subnet mask and default gateway.


 


 


11. Scroll down and enter in your DNS server IP address, domain search list and NTP server settings. Click Next.


 


 


12. You are now presented with a summary screen with all the settings that you’ve previously chosen. If you wish to make any changes click on the Back button, otherwise click Finish to begin the deployment.


 


 


 


 


Configuring VMware NSX for vCloud Director


1. Open a browser and browse to the DNS or IP address of your VMware NSX manager. Enter your username and password and login.


 


 


2. Click on View Summary.


 


 


3. A summary of services, CPU, Memory, Storage, IP address, versions and uptime is displayed. Click on Manage


 


 


4. Check your NTP server, timezone and Date/Time to ensure they are correct. You can also use this screen to specify a Syslog Server. Click Network.


 


 


5. This screen displays all your network settings for the VMware NSX controller. Click SSL certificates.


 


 


6. As we can see in the screen shot below, we are using a self-signed SSL certificate generated at the time of installation. You can use this screen to Generate a CSR, submit the request to a trusted Certificate Authority and obtain a signed digital certificate. You can use the Upload PKCS#12 Keystore button to upload the certificate. Click Backups and Restore


 


 


7. Within this window we can schedule a one-time or recurring backup of the VMware NSX manager configuration.


 


 


8. Click on Change next to FTP Server Setting. As we can see in the screen shot below we can either utilize FTP or SFTP for our configuration backups


 


 


9. Under Components – Click on NSX Management Service. Here we will be entering our information for our vCenter Lookup Service and vCenter Server settings.


 


 


10. Click Edit next to Lookup Service. Enter in your Lookup service IP or DNS name and a username and password with administrator rights to your SSO. In a production environment create a new admin account other than administrator and use that account here.


 


 


11. Click Yes to proceed with Trusting the Certificate


 


 


12. The lookup service has now been connected. Next we will setup our vCenter Server connection.


 


 


13. Click Edit next to vCenter Server. Enter in your vCenter Server IP or DNS name and a user with administrator access to your vCenter Server. In a production environment create a new admin account other than administrator and use that account here. Click Ok.


 


 


14. Click Yes to proceed with Trusting the Certificate


 


 


15. The vCenter Server is now successfully connected to the VMware NSX Manager.


 


 


16. Log back into the vSphere web interface if you’re not already. Clicking on the home tab you will notice the new Networking & Security icon under the Inventories row.


 


 


17. Click on Networking & Security and it will bring you to the VMware NSX configuration settings.


 


 


18. Click on Installation and you will see the NSX Manager that we setup previously with IP address 192.168.1.169


 


 


19. Click on the green + under NSX Controller Nodes. From here we will be deploying 3 controllers as this is the recommended minimum for NSX. Within the Add Controller Window we will want to make sure our NSX Manager is selected, select the Datacenter, Cluster or Resource Pool, Datastore, Host and Folder locations. This will be the location where you will be installing your first NSX controller. I have selected NSX as a resource pool and all my controllers will be sitting in my management cluster, not within the vcloud cluster. Next I will select the management network for the controllers. Next we need to setup an IP Pool which will allocate IP addresses to our controllers.


 


 


20. Click Select next to IP Pool. You are presented with the Select IP Pool window.


 


 


21. Click New IP Pool. Type in a Name to reference the IP Pool. Enter in the gateway IP address for the management network along with the prefix length (24 for 255.255.255.0). I am going to specify my Windows Active Directory DNS server as the Primary DNS and, for the DNS suffix, my Active Directory domain vmlab.local.


Within Static IP Pool, I will enter the range of IP addresses that I will allocate to my 3 NSX controllers. As you can see in the screen shot below I will allocate 192.168.1.177, 192.168.1.178 and 192.168.1.179. Click Ok once you have finished.
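If the prefix-length and pool arithmetic above looks unfamiliar, Python's standard-library ipaddress module makes it easy to check (using the lab addresses from this step):

```python
# Sanity-checking the IP Pool settings with the stdlib ipaddress module.
# Addresses are the lab values used in this walkthrough.
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
print(network.netmask)   # 255.255.255.0 -- what "prefix length 24" means

# The static pool 192.168.1.177-192.168.1.179 covers the 3 controllers:
start = ipaddress.ip_address("192.168.1.177")
end = ipaddress.ip_address("192.168.1.179")
pool = [ipaddress.ip_address(i) for i in range(int(start), int(end) + 1)]
print(len(pool))         # 3 addresses, one per NSX controller

# All pool addresses must sit inside the management network:
assert all(ip in network for ip in pool)
```

Sizing the static pool to exactly the number of controllers you plan to deploy avoids surprises when NSX later assigns "the next available IP" from the pool.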


 


 


22. You will now return back to the Select IP Pool screen where you can see that the newly created IP Pool is listed. Clicking on the IP Pool displays the settings off to the left. Click Ok when finished


 


 


23. Lastly we will enter in a password which will be assigned to the CLI of each NSX controller. When you are happy with all the settings click Ok.


 


 


24. The first NSX controller begins deployment as you can see under NSX Controller nodes.


 


 


25. We will move on and install NSX Controller 2. Click the green + under NSX Controller Nodes. I only have 1 VMware ESXi host in my management cluster so I will keep my settings the same, however if you have more than 1 host in your cluster make sure you spread the controllers amongst hosts, datastores and resource pools. Click on Select next to IP Pool and select the previously created IP Pool, this will assign the next available IP address in the pool to this controller. Click Ok to deploy the second controller.


 


 


26. This warning appears due to my lab only having 1 host in my management cluster. However it’s a good reminder in case you forget. Click Yes to continue


 


 


27. We now have the second NSX controller deployed.


 


 


28. Repeat steps 25 and 26 to deploy the third and last controller.


 


 


29. Our next step is to push out NSX to all hosts in the cluster. Click on Installation on the left hand side, then on the right hand side under Clusters & Hosts, select your vCloud Cluster and under Installation Status click your mouse on the right hand side of Not Installed. A little purple cog will appear and you can drop down a menu and select Install.


 


 


30. A confirmation window appears. Click Yes.


 


 


31. The NSX Agent has been deployed to all my vCloud Cluster ESXi hosts. The agent version is displayed along with Enabled under the Firewall column.


 


 


32. We will now configure our VMware ESXi hosts for VXLAN. Click to the left of Not Configured, under the VXLAN column, and select Configure VXLAN.


 


 


33. Select the distributed switch belonging to the vCloud Director cluster, type in the VLAN you wish to use exclusively for VXLAN traffic, and make the MTU 1600 (ensure your physical switches can support jumbo frames, i.e. larger than 1500 MTU; check with your switch manufacturer on how to configure jumbo frames). For the VMkernel NICs that will be allocated to each VMware ESXi host, we will be assigning static IPs via an IP Pool. Select Use IP Pool and select New IP Pool.
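The reason for the 1600-byte MTU is the encapsulation overhead VXLAN adds to every frame. The commonly quoted figure is about 50 bytes; a quick back-of-the-envelope check in Python:

```python
# Why MTU 1600: VXLAN wraps each original frame in new headers, commonly
# totalled at ~50 bytes. Header sizes below are the standard ones for
# IPv4 transport (IPv6 would add more).

INNER_MTU = 1500   # standard Ethernet MTU of the encapsulated frame
OUTER_ETH = 14     # outer Ethernet header
OUTER_IP  = 20     # outer IPv4 header
OUTER_UDP = 8      # outer UDP header
VXLAN_HDR = 8      # VXLAN header

overhead = OUTER_ETH + OUTER_IP + OUTER_UDP + VXLAN_HDR
print(overhead)               # 50 bytes of encapsulation overhead
print(INNER_MTU + overhead)   # 1550 -- so an MTU of 1600 leaves headroom
```

Anything on the transport path limited to a 1500-byte MTU would fragment or drop full-size encapsulated frames, which is why the physical switches must support jumbo frames here.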


 


 


34. As you can see in the screen shot below I have given my IP Pool the name VXLAN NIC Pool with a gateway of 192.168.100.1, prefix length of 24 and Static IP Pool of 192.168.100.175 – 192.168.100.176. I have only allocated 2 IPs for this pool as I will only have a maximum of 2 VMware ESXi hosts in this vCloud Director cluster.


 


 


35. Make sure your newly created VXLAN NIC Pool is selected, for the VMKNic Teaming Policy, ensure that the teaming policy that is in use in your network is selected. Click Ok.


 


 


36. We can now see that for our vCloud Director cluster, VXLAN is configured with the green tick.


 


 


37. If we now browse to networking for our vCloudDSSwitch, we can see the VXLAN (vlan 1000) port group created


 


 


38. Clicking on Hosts and Clusters and selecting the VMware ESXi host vcloud6esxi.vmlab.local – Manage – Networking – VMKernel adapter, we can see the VXLAN vmk3 adapter which has an IP address from the IP Pool of 192.168.100.175


 


 


39. Browse back Home – Networking & Security – Installation – Logical Network Preparation – Segment ID, and select Edit. Type in a range between 5000 and 16777215; the size of this range is the number of VXLAN networks you can create for vCloud Director. For my lab I’ve entered 5000-7000. Click Ok.
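Since the segment ID range is inclusive at both ends, the count of available VXLAN networks is easy to verify:

```python
# The Segment ID pool is an inclusive range: its size is the number of
# VXLAN logical networks available. 5000-7000 is the lab range above.

def segment_count(first, last):
    return last - first + 1   # inclusive on both ends

print(segment_count(5000, 7000))       # 2001 segments in the lab range
print(segment_count(5000, 16777215))   # the full range NSX allows here
```

A small lab range like 5000-7000 is plenty; sizing it is only worth thinking about when many tenants will each be creating their own networks.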


 


 


40. The Segment ID Pool is now set. In part 5 we will look at configuring VM Storage Policies in order to classify our tiered Storage.