Thursday 27 December 2018

EMC E20-020 Questions Answers

If the metering tool supports multi-tenancy and integrates with the cloud portal, what should
be considered?


A. Only permit local user access for security reasons
B. Metering tool should not integrate with tenant authentication
C. Metering server access should be allowed for all tenants
D. Role-based access for security reasons

Answer: D

Friday 21 December 2018

EMC E20-020 Questions Answers

What are general design considerations for inter-cloud links?

A. Internet connection, network service provider adjacent to the cloud provider, and application behavior
B. Router and load balancer session synchronization, Internet connection, and network service provider adjacent to the cloud provider
C. Application behavior, router and load balancer session synchronization, and Internet connection
D. Network service provider adjacent to the cloud provider, application behavior, and router and load balancer session synchronization

Answer: B


An architect is designing a cloud. The architect has decided to place the cloud management platform components on a separate set of compute, network, and storage infrastructure from the consumer resources. Why would the architect make this decision?

A. Provide static and dedicated resources to the management applications without affecting the efficiency of consumer resource sharing
B. Provide static and dedicated resources to the consumer applications without affecting the efficiency of the management applications
C. Provide additional security to the management infrastructure by eliminating exposure to public-facing networks
D. Provide separate billing for the management applications for the consumer’s resources

Answer: C

Sunday 5 August 2018

EMC E20-020 Questions Answers

A company wants to build an IaaS cloud to host cloud-native applications. On which areas should a cloud architect focus when gathering requirements for this cloud design?

A. Automation, multi-tenancy, and hardware availability
B. Automation, hardware availability, and policy compliance
C. Hardware availability, policy compliance, and multi-tenancy
D. Policy compliance, automation, and multi-tenancy

Answer: C



Which networking technology is required for application HA in contrast to host HA or operating system HA?


A. Multipath routing
B. Shared storage
C. Load balancing
D. VLANs

Answer: B

Tuesday 27 February 2018

EMC E20-020 Questions Answers

Which authentication and authorization feature is critical for service providers when supporting multi-tenancy?

A. Integration with external authentication services
B. Integration with external directory services
C. Portal authentication services
D. Single sign-on capability

Answer: A


A cloud architect has included a monitoring application in a cloud design to ensure infrastructure performance meets agreed-upon service levels. The application resides on a virtual appliance. The application vendor provides guidance for sizing the appliance.
What will be part of the sizing calculations for the virtual appliance storage?

A. Number of targets, message bus subscribers, data deduplication policies
B. Number of segments, number of metrics, data deduplication policies
C. Number of targets, number of metrics, data retention policies
D. Message bus subscribers, number of users, data retention policies

Answer: C

Wednesday 27 December 2017

Dell EMC PowerEdge R740xd Server Review

In the spring of 2017, Dell EMC launched the much-anticipated update of the PowerEdge line, moving the family from Broadwell to Intel's new Xeon Scalable processors. The update includes the new R740 server family, comprising the conventional R740 and the "extreme disk" version, the R740xd, which we analyze in this review. This powerful server supports a wide range of storage options, extending up to eighteen 3.5" or thirty-two 2.5" disks for incredible capacity, or up to twenty-four 2.5" NVMe SSDs if blistering storage I/O is the priority. Compute and DRAM are not left in the back seat either, with the R740xd supporting up to two Intel Xeon Scalable processors with 28 cores each and a maximum memory footprint of 3TB. There are few applications in which this new server will not stand out, which is exactly the direction Dell EMC took when designing this thoroughly modular platform.


The PowerEdge R740 servers combine strong compute and storage in a 2U box. The server can be configured with up to two Intel Xeon Scalable CPUs and 24 DDR4 DIMMs (or 12 NVDIMMs), but where the family really shines is storage. While the R740 offers up to 16 drive bays, the xd offers up to 32 2.5" bays, 24 of which can be NVMe. The R740xd also offers some unique storage arrangements beyond typical front-loading bays, including mid-chassis and rear-loading bays that fit all of that additional storage into the same 2U footprint. The design lets users tailor storage to their application, mixing NVMe, SSD, and HDD in the same chassis to create storage tiers within the box. The R740xd also supports up to 192GB of NVDIMM. In addition, the R740xd can boot from internal M.2 SSDs in RAID through an optional card, freeing the more accessible bays for workload storage. Both versions are a good fit for SDS, service providers, and VDI, with total storage capacity and NVMe support being the key differences. Also new in the R740/R740xd is broader support for GPUs and FPGAs: both are compatible with up to three 300W cards or six 150W cards. In this generation, Dell EMC designed the BIOS to automatically register the airflow required by each card and deliver individually tailored airflow through a feature called multi-vector cooling.

With every update of any server line come new CPUs, more RAM, and better storage and networking options. What differentiates many companies, however, is complete management of the product's life cycle. Within reason, any server with the same hardware specifications will deliver roughly the same performance. The difference quickly becomes apparent in the quality of the hardware, the breadth of the support software, and how easily the system can be deployed in a given environment. This is a key area in which Dell EMC stands apart from others in the market. Dell EMC offers users key tools such as Lifecycle Controller, iDRAC, OpenManage Mobile, and others. We have taken advantage of many of these tools in our own environment, and again and again we have been impressed by how simple and mature the platform has become over time.

The new PowerEdge servers have software-defined storage (SDS) support integrated from the start, which suits them to use cases such as hyperconverged infrastructure. In its own line-of-business products, Dell EMC leverages the R740 in pre-built and validated solutions, such as Ready Nodes for ScaleIO or vSAN, as well as the PowerEdge XC line. The R740xd allows configurations that devote all external drive bays to the SDS product itself, keeping the boot volume on an internal M.2 SSD.

The new Dell EMC PowerEdge R740xd is available now and is highly customizable. For this review, we leveraged an individual R740xd with a near top-end configuration, as well as a cluster of 12 R740xds with a more modest configuration.

The single R740xd we are using is built with the following:

    Dual Intel Xeon Platinum 8180 CPUs
    384GB of DDR4 2667MHz RAM (32GB x 12)
    4 x 400GB SAS SSDs
    2 x 1.6TB NVMe SSDs
    Mellanox ConnectX-4 Lx Dual Port 25GbE DA/SFP rNDC
    LCD bezel with Quick Sync 2 and OpenManage capabilities
    iDRAC 9 Enterprise

Dell EMC PowerEdge R740xd Server specifications:

    Form factor: 2U Rackmount
    Processors: up to 2 Intel Xeon Scalable CPUs, up to 28 cores each
    Memory: 24x DDR4 RDIMM, LR-DIMM (3TB max)
    NVDIMM support: up to 12 NVDIMMs (192GB max)
    Drive Bays
        Front bays:
            Up to 24 x 2.5” SAS/SSD/NVMe, max 153TB
            Up to 12 x 3.5” SAS, max 120TB
        Mid bay:
            Up to 4 x 3.5” drives, max 40TB
            Up to 4 x 2.5” SAS/SSD/NVMe, max 25TB
        Rear bays:
            Up to 4 x 2.5” max 25TB
            Up to 2 x 3.5” max 20TB
    Storage Controllers
        Internal controllers: PERC H730p, H740p, HBA330, Software RAID (SWRAID) S140
        Boot Optimized Storage Subsystem: HWRAID 2 x M.2 SSDs 120GB, 240 GB
        External PERC (RAID): H840
        External HBAs (non-RAID): 12 Gbps SAS HBA
    Ports
        Network daughter card options: 4 x 1GE or 2 x 10GE + 2 x 1GE or 4 x 10GE or 2 x 25GE
        Front ports: VGA, 2 x USB 2.0, dedicated iDRAC Direct Micro-USB
        Rear ports: VGA, Serial, 2 x USB 3.0, dedicated iDRAC network port
    Video card: VGA
    PCIe slots: up to 8 x Gen3, up to 4 x16
    GPU Options:
        Nvidia Tesla P100, K80, K40, Grid M60, M10, P4, Quadro P4000.
        AMD S7150, S7150X2
    Supported OS
        Canonical Ubuntu LTS
        Citrix XenServer
        Microsoft Windows Server with Hyper-V
        Red Hat Enterprise Linux
        SUSE Linux Enterprise Server
        VMware ESXi
    Power
        Titanium 750W; Platinum 495W, 750W, 1100W, 1600W, and 2000W
        48VDC 1100W, 380HVDC 1100W, 240HVDC 750W
        Hot plug power supplies with full redundancy
        Up to 6 hot plug fans with full-redundancy, high-performance fans available

Design and Build

The new PowerEdge servers have been redesigned not only to look flawless (which they do), but also to reflect how users and applications interact with them. Across the front is the new bezel, which offers Quick Sync support with its OpenManage wireless capabilities. The design of the new servers is also shared with the new Dell EMC storage offerings, including systems such as the Unity 450F all-flash array. Behind the bezel are 24 2.5" bays that support SATA, SAS, nearline SAS, and NVMe (if configured to do so).


The front can also be configured to support 12 3.5" drives if maximum capacity is a bigger concern than performance. On the left side are health and ID lights and the iDRAC Quick Sync 2 wireless activation button. On the right side are the power button, a VGA port, the iDRAC Direct micro-USB port, and two USB 2.0 ports.

Where others in the market are finding ways to cut costs by eliminating components, one item Dell EMC has kept as an option for the R740xd and the R740 is the front panel. Some might ask, "Who cares?" But the small LCD screen and its three buttons are incredibly useful in a data center environment. Consider a scenario where you cannot reach iDRAC remotely because the management network configuration has changed, and you don't want to power-cycle the server to reconfigure it manually with a crash cart and keyboard. On a Dell EMC server, you can step through the iDRAC configuration on the small front-panel interface and change the IP address from static to DHCP. Without that functionality, many systems would need a restart to change it manually. On the R740xd, this is handled completely out of band through the front-panel controls.



 Taking the top cover off shows the inner workings and the massive attention to detail that Dell EMC has put into the new PowerEdge servers. Many of the server components can be swapped out easily if the need arises, and clutter is kept to a minimum to improve airflow. In the system we reviewed, you are able to see the dual-slot m.2 boot SSD card, two RAID cards, as well as two PCIe pass-through adapters for the NVMe slots in front.


Our build also includes the internal dual-slot microSD boot device for hypervisor storage. Not as apparent (but very important) is the full cooling fan duct that keeps air moving across the hardware throughout the system, keeping hot spots to a minimum and allowing the server to avoid excessive fan noise. Over the course of our tests, we noticed no excess fan noise. Under extreme load with saturated CPUs, the noise from the fans remained well below other whitebox systems in our laboratory. Another interesting element was how the system handled airflow at higher ambient air temperatures. Our lab uses fresh-air cooling, so the systems here see a wide variety of intake temperatures. In situations where the R740xd operated with high ambient air temperatures, it ramped fan speeds up as needed but still kept noise to a minimum. This is in stark contrast to other servers and hardware in our lab that can be heard through closed doors or drown out conversations around you.

  In both of our configurations, mid-bay storage options were not configured into the build. We pulled out an example shot from the PowerEdge R740xd technical manual which shows the internal 3.5" bays, as well as the 2.5" drive mounts. Few, if any, other mainstream servers offer this high level of density in a system configuration. While there are unique server builds floating on the market, many are custom-built for the application. This makes a world of difference in terms of how unique systems are managed and deployed, as well as who is administering them in the datacenter.


Flipping around to the rear of the R740xd, customers looking for maximum expansion potential should take note. Starting in the upper left-hand corner, there are three full-height PCIe expansion slots, and beneath them are a system identification button, an iDRAC dedicated networking port, a serial port, VGA port, and two USB 3.0 ports. In the middle are two more full-height PCIe slots, in addition to a half-height slot used for the RAID card on this build. Below those is an rNDC slot which is populated with a dual-port 25Gb Mellanox NIC. On the upper right-hand side are two more full-height PCIe slots above the dual power supplies. With two full-height PCIe slots to spare, Dell EMC loaded in support for four 2.5" NVMe SSDs, dual RAID cards, dual M.2 boot SSDs, as well as a dual-port 25Gb Ethernet NIC.

The rNDC slot is leveraged for the onboard primary network interface. This can be pre-populated with a number of offerings, ranging from a quad-port 1GbE NIC up to dual-port 25Gb offerings from both Mellanox and Broadcom. None of the options take away from one of the server's available PCIe slots, keeping them completely open for other uses. As we've shown in our rNDC upgrade guide, this bay is easy to upgrade and quite helpful at keeping networking devices out of the main PCIe slots.

Management

The PowerEdge R740xd offers a wide range of management options, including some traditional, as well as others that fit in the palm of your hand. The R740xd can be deployed by leveraging Dell EMC’s OpenManage Mobile app or locally like previous generation servers. The abilities of OpenManage Mobile can really make a difference, especially when you are setting up several servers in one data center, or you just want to get it finished on the floor without going back and forth to your desk or bringing a crash cart. Leveraging pre-built profiles to rapidly deploy a server with nothing more than a cell phone dramatically speeds up a process that frequently requires a crash cart in a datacenter.


An onboard WiFi radio connects users to the R740xd server, and access to it is clamped down and very secure. You need local, physical access to the server: first to switch on the wireless radio from the front panel, and then to scan the information tag on the front of the server. Once the network is on, you are given access to a private LAN, reachable from your phone or mobile workstation, to interface with iDRAC through the mobile app or a web browser.



This enables a mix of handheld access for quick status checks or system polling, and more advanced functionality such as iKVM work, without worrying about connecting wires or crash carts. The very short range (5 to 10 feet from the server in a datacenter environment) also helps minimize the chance of anyone hopping onto the system unnoticed. When your work is completed, turning off the wireless radio disables any further access.



 A welcome addition also built into iDRAC is Group Manager, which allows IT admins to manage a group of R740 servers from within iDRAC itself. In our environment, we have the first R740xd acting as the group leader, requiring just one login to remotely manage multiple servers. From a central point you can get server status, as well as power toggle each server and quickly jump into its local iDRAC interface without having to type in additional login information.


iDRAC has been the heart of Dell management for some time now. Just recently the company announced a slew of enhancements to further improve the user experience, as well as the overall functionality of iDRAC. iDRAC9 has added a more powerful processor to quadruple its performance. It now comes with more automation, saving time for IT admins while reducing errors. All BIOS settings can now be adjusted through iDRAC instead of booting into the BIOS. The new iDRAC has enhanced storage configuration options such as online capacity expansion, RAID level migration, cryptographic physical drive erase, rebuild/cancel rebuild of physical drives, enabling revertible hot spares, and virtual disk renaming.
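
Much of this functionality is scriptable as well. As a rough illustration, the sketch below polls an iDRAC's Redfish API for basic system health from Python. The resource path follows Dell's published Redfish conventions, but treat the exact URI, credentials, and field names as assumptions to verify against your iDRAC firmware's documentation:

    # Minimal sketch: poll an iDRAC9 Redfish endpoint for system health.
    # Assumptions: Redfish is enabled and the standard Dell system resource
    # path applies -- verify against your firmware's Redfish documentation.
    import requests

    IDRAC_HOST = "192.168.0.120"        # hypothetical iDRAC address
    USER, PASSWORD = "root", "calvin"   # replace with real credentials

    def get_system_summary():
        url = f"https://{IDRAC_HOST}/redfish/v1/Systems/System.Embedded.1"
        # verify=False is for lab use only; use proper TLS verification in production
        resp = requests.get(url, auth=(USER, PASSWORD), verify=False)
        resp.raise_for_status()
        data = resp.json()
        return {
            "model": data.get("Model"),
            "power_state": data.get("PowerState"),
            "health": data.get("Status", {}).get("Health"),
        }

    if __name__ == "__main__":
        print(get_system_summary())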


 When we mention that performance within iDRAC has dramatically improved, it's not overstated in the slightest. The new HTML5 interface is much faster in all areas, including initial login and full interaction through the iDRAC WebGUI. Compared to the R730 (which wasn't a slouch when it came out), it's night and day. As far as some of the new features used directly when logging into iDRAC, the management now has a remote view called Connection View. This can give IT admins a look at various aspects of the server right away. Along with this is a new dashboard for remote managing with iDRAC Group manager. For further direct-connected accessibility, there is now a port for iDRAC directly on the front of the server.


 Additional features have been brought into iDRAC that allow users to better customize each server for its given application. BIOS level customizations can now be set through iDRAC itself, without requiring a console login. This makes it easier to change a few key settings before the initial deployment, all through a simple web browser or an app on your mobile phone. For deploying a number of servers at a time, users can also build up a server profile file to quickly deploy across multiple servers.


 Management of installed hardware also took an interesting path with this latest generation server. Dell made it easier for users to manage PCIe add-on cards, where the server detects the type of card and will automatically adjust the fan speed for proper cooling. Airflow can be further tweaked with a custom LFM fan speed setting per installed device, as well as a master offset adjustment at the server level. Many of the cooling tweaks aren't done to cool installed hardware "better" than previous generation servers; instead, this is more about *perfectly* cooling hardware with the least amount of airflow required. In many servers, you can set the fans to full speed and not worry about over-heating equipment. But this is at the cost of excess power and noise.


Minimized airflow goes a long way towards reducing power consumption by not wasting energy spinning fans needlessly fast. At the end of the day, this also makes the datacenter a more pleasant place to work, without fans buzzing at deafening levels.

Performance

When comparing the R740xd to prior generation systems, compute and storage potential have skyrocketed. In the Broadwell-based R730 series, the top-spec CPU (E5-2699v4) offers an aggregate 96.8GHz in a dual-processor configuration (2 sockets x 22 cores x 2.2GHz base). With the Intel Xeon Scalable line inside the PowerEdge R740xd, the top-end CPU (Platinum 8180) pushes that number to 139.66GHz. At face value, that's a 44% jump, and it doesn't even account for improvements in clock speed at those higher core counts or faster DRAM. On the storage side, NVMe SSDs have also taken on a bigger role inside R740xd configurations, with offerings now topping out at 24 NVMe SSDs, where four used to be the peak on the R730xd.
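
The arithmetic behind those aggregate figures is simply sockets x cores x base clock. A quick sanity check in Python, using the CPUs' public nominal base clocks (note that nominal math yields an even 140GHz for the 8180 pair, marginally above the 139.66GHz cited above):

    def aggregate_ghz(sockets: int, cores: int, base_ghz: float) -> float:
        """Aggregate base clock across all cores in a multi-socket system."""
        return sockets * cores * base_ghz

    e5_2699v4 = aggregate_ghz(2, 22, 2.2)   # R730 top option: 96.8 GHz
    xeon_8180 = aggregate_ghz(2, 28, 2.5)   # R740xd top option: 140.0 GHz

    print(f"E5-2699v4 pair: {e5_2699v4:.1f} GHz")
    print(f"Platinum 8180 pair: {xeon_8180:.1f} GHz")
    print(f"Uplift: {xeon_8180 / e5_2699v4 - 1:.0%}")   # ~45% with nominal clocks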

As we look at the improvements made on the latest generation Dell EMC PowerEdge server, we will touch on both local performance, as well as clustered performance across a group of eight servers leveraging storage from a Dell EMC Unity 450F All-Flash array in a later review. This review layout is geared to help interested buyers see how these servers perform well equipped in single instances, as well as how they interact in a highly virtualized environment within a Dell EMC ecosystem. Bringing together all these systems are Mellanox ConnectX-4 25Gb rNDC NICs, as well as Dell EMC Networking Z9100 100G switches.

In our section looking at local system performance, we have a well-equipped R740xd which we are testing with two different NVMe combos. One is with two Samsung 1.6TB PM1725a NVMe SSDs, while the second is using four Toshiba 1.6TB PX04P NVMe SSDs. With the Intel Platinum 8180 CPUs inside, we had plenty of CPU cycles to throw at our storage workloads, giving us a chance to show the difference moving from two to four NVMe SSDs within the same application workload. Additionally, we also push storage to the brink inside an ESXi 6.5 environment with a multiple-worker vdbench test, with multiple workloads geared at simulating basic four-corners testing up to VDI traces.

Sysbench MySQL Performance

Our first local-storage application benchmark consists of a Percona MySQL OLTP database measured via SysBench. This test measures average TPS (Transactions Per Second), average latency, and average 99th percentile latency as well.

Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and a third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM, and the LSI Logic SAS SCSI controller. The per-VM test configuration is listed below, followed by a rough sketch of a comparable sysbench invocation.

Sysbench Testing Configuration (per VM)

    CentOS 6.3 64-bit
    Percona XtraDB 5.5.30-rel30.1
        Database Tables: 100
        Database Size: 10,000,000
        Database Threads: 32
        RAM Buffer: 24GB
    Test Length: 3 hours
        2 hours preconditioning 32 threads
        1 hour 32 threads
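
For reference, a comparable run could be driven with sysbench 1.x's bundled oltp_read_write workload. The review used Percona's older sysbench OLTP scripts, so treat this Python wrapper as a hedged approximation with placeholder host and credentials, not the literal command line used here:

    # Hedged sketch of a comparable Sysbench OLTP run (sysbench 1.x syntax).
    # Host, credentials, and workload name are assumptions, not the review's
    # exact invocation.
    import subprocess

    COMMON = [
        "sysbench", "oltp_read_write",
        "--mysql-host=127.0.0.1",                 # hypothetical DB host
        "--mysql-user=sbtest", "--mysql-password=sbtest",
        "--tables=100", "--table-size=10000000",  # 100 tables x 10M rows
        "--threads=32",
    ]

    # Build the test tables once.
    subprocess.run(COMMON + ["prepare"], check=True)

    # 2 hours of preconditioning, then the 1-hour measured run.
    subprocess.run(COMMON + ["--time=7200", "run"], check=True)
    result = subprocess.run(COMMON + ["--time=3600", "run"],
                            check=True, capture_output=True, text=True)
    print(result.stdout)   # TPS plus average and percentile latency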

We compared the performance of two Sysbench runs on the PowerEdge R740xd: one with 4 VMs hosted on two NVMe SSDs, and another with 4 VMs each given a dedicated NVMe SSD. In neither test was the CPU pushed to the breaking point of 100%; we saw roughly 60% and 80% CPU utilization across the two benchmarks, meaning there was still room to grow with additional VMs and more DRAM. In the first test, with two NVMe SSDs hosting the Sysbench VMs, aggregate TPS came to 11,027; in the second test, with four NVMe SSDs, aggregate TPS increased to 13,224. This compares to 10,683 TPS from the PowerEdge R630 we benchmarked about a year ago, also with E5-2699v4 CPUs and four NVMe SSDs.


Looking at average latency in our Sysbench workload, the 2 NVMe SSDs result came in at 11.61ms, whereas the 4 NVMe SSDs result came in at 9.69ms.


In our worst-case 99th percentile latency measurement, 2 NVMe SSDs measured 24.5ms, while 4 NVMe SSDs came in at a very stable 20.7ms.


SQL Server Performance

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.

Each SQL Server VM is configured with two vDisks: a 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM, and the LSI Logic SAS SCSI controller. While our previously tested Sysbench workloads saturated the platform in both storage I/O and capacity, the SQL test looks for latency performance. The per-VM configuration is listed below, followed by a small latency-probe sketch.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Dell's Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across our servers.

SQL Server Testing Configuration (per VM)


    Windows Server 2012 R2
    Storage Footprint: 600GB allocated, 500GB used
    SQL Server 2014
        Database Size: 1,500 scale
        Virtual Client Load: 15,000
        RAM Buffer: 48GB
    Test Length: 3 hours
        2.5 hours preconditioning
        30 minutes sample period
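
As a rough companion to Benchmark Factory's TPC-C driver (which does the real work here), a simple latency probe against one of the SQL Server VMs can be scripted in Python. The connection string values and the table name are placeholders assuming a TPC-C-style schema; this is a hedged sanity check, not part of the benchmark itself:

    # Hedged sketch: sample query latency against a SQL Server VM.
    # Connection details and the "warehouse" table are assumptions.
    import time
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=10.0.0.50;DATABASE=tpcc;UID=sa;PWD=Password1"  # hypothetical
    )
    cursor = conn.cursor()

    samples = []
    for _ in range(100):
        start = time.perf_counter()
        cursor.execute("SELECT COUNT(*) FROM warehouse")
        cursor.fetchone()
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
        time.sleep(0.1)

    samples.sort()
    avg = sum(samples) / len(samples)
    p99 = samples[int(len(samples) * 0.99) - 1]
    print(f"avg: {avg:.2f} ms, p99: {p99:.2f} ms")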

Similar to how we ran our Sysbench benchmark, we tested configurations with 2 NVMe SSDs as well as 4 NVMe SSDs. With 4 VMs spread over 2 drives, aggregate TPS in Benchmark Factory measured 12,631, whereas with 4 NVMe SSDs it measured 12,625. While this is a bit counter-intuitive for our particular configuration of the benchmark, the latency measured below tells the real story.

With 2 NVMe SSDs, we saw average latency of 6.5ms across our four SQL Server workloads, while with 4 NVMe SSDs that number dropped to just 4ms. The server used just 20% and 22% of its CPU in the two tests, respectively. The PowerEdge R740xd with dual Intel 8180 CPUs has an immense amount of compute and storage potential to throw at these types of database workloads without breaking a sweat.


VDBench Workload Analysis

Our last section of local performance testing focuses on synthetic workload performance. Here, we leveraged four NVMe SSDs in VMware ESXi 6.5 and evenly spread out 16 worker VMs, each with two 125GB vmdks mounted, for a total storage footprint of 4TB. This type of test is useful for showing what real-world storage metrics look like with the overhead associated with a virtualized environment.

When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparisons between competing solutions. These workloads offer a range of testing profiles, from "four corners" tests and common database transfer-size tests to trace captures from different VDI environments. All of these tests leverage the common vdbench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices. On the array side, we use our cluster of Dell PowerEdge R730 servers. The profiles are listed below, followed by a rough sketch of what one looks like as a vdbench parameter file:

Profiles:

    4K Random Read: 100% Read, 128 threads, 0-120% iorate
    4K Random Write: 100% Write, 64 threads, 0-120% iorate
    64K Sequential Read: 100% Read, 16 threads, 0-120% iorate
    64K Sequential Write: 100% Write, 8 threads, 0-120% iorate
    Synthetic Database: SQL and Oracle
    VDI Full Clone and Linked Clone Traces
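
As an illustration of the profiles above, the sketch below generates a vdbench parameter file approximating the 4K random read test (100% read, 128 threads). The sd/wd/rd keywords follow vdbench's documented syntax, but the device path is a placeholder, and the review's 0-120% iorate sweep is simplified here to a single iorate=max run:

    # Hedged sketch: emit a vdbench parameter file for a 4K random read test.
    # The NVMe device path is a placeholder; adapt it to your environment.
    lines = [
        "sd=sd1,lun=/dev/nvme0n1,openflags=o_direct,threads=128",
        "wd=wd_4kread,sd=sd1,xfersize=4k,rdpct=100,seekpct=100",
        "rd=rd_4kread,wd=wd_4kread,iorate=max,elapsed=300,interval=1",
    ]

    with open("4k_rand_read.vdb", "w") as f:
        f.write("\n".join(lines) + "\n")

    print("Wrote 4k_rand_read.vdb; run with: ./vdbench -f 4k_rand_read.vdb")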

Looking at peak read performance, the Dell EMC PowerEdge R740xd offered sub-millisecond 4K read latency up to just over 800K IOPS, starting at 0.21ms. At its peak, the R740xd measured 978K IOPS at a latency of 3.8ms.


Looking at 4K peak write performance, the R740xd started off with a latency of 0.12ms and stayed below 1ms until it hit around 730K IOPS. At its peak, the R740xd hit over 834K IOPS at 2.4ms.



 Switching to 64K peak read, the R740xd started off at 0.27ms latency and stayed below 1ms until it hit around 150K IOPS. It peaked just over 170K IOPS with 3ms of latency. The R740xd finished with a bandwidth of 10.644GB/s.  



For 64K sequential peak write, the R740xd started off at 0.14ms and stayed under 1ms until it hit just over 65K IOPS. The R740xd hit its peak at 93K IOPS with a latency of 2.7ms. The R740xd also had a bandwidth of 5.83GB/s at its peak.


 In our SQL workload, the R740xd started its latency at 0.21ms and stayed under 1ms until it hit between 700K and 750K IOPS. It peaked at 760K IOPS and only 1.29ms.


In the SQL 90-10 benchmark, the R740xd started with a latency of 0.2ms and stayed under 1ms until just under 600K IOPS. The R740xd peaked over 634K IOPS with 1.57ms latency.


The SQL 80-20 saw the R740xd start with a latency of 0.2ms and stay under 1ms until it moved past 481K IOPS. The R740xd peaked at nearly 538K IOPS with 1.7ms latency.


 With the Oracle Workload, the R740xd started with a latency of 0.2ms and stayed under 1ms until just over 400K IOPS. The R740xd peaked at 470K IOPS with a latency of 2.5ms.

 With the Oracle 90-10, the R740xd started off at a latency of 0.2ms and stayed under 1ms the entire benchmark. It peaked at 636K IOPS with a latency of 0.98ms.


 With the Oracle 80-20, the R740xd started off at a latency of 0.2ms and stayed under 1ms until it was just under 529K IOPS. It peaked at 533K IOPS with a latency of 1.14ms.


 Switching over to VDI Full Clone, the boot test showed the R740xd starting with a latency of 0.21ms and staying under 1ms until around 490K IOPS. The R740xd peaked at 539K IOPS with a latency of 1.9ms.


 The VDI Full Clone initial login started off at 0.17ms latency and stayed under 1ms until around 175K IOPS. The R740xd peaked at 218K IOPS with a latency of 4.1ms.


 The VDI Full Clone Monday login started off at 0.2ms latency, staying under 1ms until over 180K IOPS. It peaked at 215K IOPS with 2.36ms.


 Moving over to VDI Linked Clone, the boot test showed performance staying under 1ms up to roughly 350k IOPS, and later topping out at a peak of 376K IOPS with an average latency of 1.36ms.


 In the Linked Clone VDI profile measuring Initial Login performance, we saw sub-ms latency up till around 130k IOPS, where it further increased to 154K IOPS at 1.64ms at its peak.


 In our last profile looking at VDI Linked Clone Monday Login performance, we see the 1ms barrier transition happening at around 109K IOPS, where the workload continued to increase to its peak at 151K IOPS and 3.36ms average latency.



Conclusion

The new Dell EMC PowerEdge R740xd is the "extreme disk" version of the R740. Within its 2U space, it can hold up to 32 2.5" drives, including up to 24 NVMe drives. The server can help unlock the potential of all that high-performance storage by taking up to two Intel Xeon Scalable processors and up to 3TB of memory. And Dell EMC didn't stop with hardware upgrades: the new server comes with integrated SDS support, making it ideal for use cases such as HCI. The server is modular and highly configurable to satisfy almost any customer need.

In our application performance benchmarks, we tested the Dell EMC PowerEdge R740xd with 4 VMs hosted on two NVMe SSDs, and again with each of the 4 VMs given a dedicated NVMe SSD. For Sysbench, the 4-NVMe test scored 13,224 TPS with an average latency of 9.69ms and a worst-case latency of 20.7ms, while the comparative 2-NVMe test scored 11,027 TPS with an average latency of 11.61ms and a worst-case latency of 24.5ms. For our SQL Server test, the 4-NVMe configuration achieved an aggregate TPS of 12,625 and an aggregate latency of 4ms, while the 2-NVMe configuration produced an aggregate TPS of 12,631 and an aggregate latency of 6.5ms.

In our VDBench workload analysis, the R740xd really shone in a virtualized ESXi 6.5 environment. In our 4K random synthetic tests, we saw sub-millisecond read performance up to 800K IOPS and sub-millisecond write performance up to 730K IOPS. In the 64K sequential read, the R740xd kept latency under a millisecond up to 150K IOPS and finished with a bandwidth of 10.644GB/s. For 64K writes, the server stayed sub-millisecond up to 65K IOPS with a bandwidth of 5.83GB/s. In our SQL workloads, we again saw solid sub-millisecond performance (up to 700K IOPS, 600K IOPS, and 481K IOPS for the base workload, 90-10, and 80-20, respectively), but most impressive was that each peaked well in excess of 500K IOPS with latency between 1.29ms and 1.7ms. The Oracle workloads also showed solid sub-millisecond performance, with the 90-10 running the entire benchmark under 1ms and reaching a peak of 636K IOPS. In the VDI Full Clone traces, the R740xd peaked at 539K, 218K, and 215K IOPS for boot, initial login, and Monday login (with peak latencies of 1.9ms, 4.1ms, and 2.36ms). And in our Linked Clone benchmarks, the server peaked at 376K, 154K, and 151K IOPS (with peak latencies of 1.36ms, 1.64ms, and 3.36ms).

Dell EMC is clearly excited about the launch of the new line of servers, and specifically about the R740xd, the centerpiece of the PowerEdge line. We have logged many weeks with the new systems, and thirteen R740xds make up the central backbone of our test lab. From the work we have done, the servers have impressed everywhere, from manageability through iDRAC and OpenManage Mobile to performance with the NVMe bays. With all the additional flexibility the xd flavor of the R740 offers, it's no surprise that Dell EMC is using it as the backbone of several of its SDS offerings, including vSAN Ready Nodes, ScaleIO Ready Nodes, direct storage nodes, VxRail, and the XC740xd (Nutanix), for example. In total, the PowerEdge R740xd is the most complete server offering we've seen to date in terms of build quality, system design, storage flexibility, performance, and manageability, which makes it a clear leader in the space and our first Editor's Choice in the server category.



Thursday 7 September 2017

EMC E20-020 Question Answer

During the assessment phase of the design process, the cloud architect discovers that an organization wants to provide consumers with the ability to backup and restore entire virtual machines. Which backup application functionality will support this requirement?

A. Agent-based backups
B. Image-based backups
C. Array-based snapshots
D. Cloud gateway backups

Answer: B


A cloud architect is expected to present a final cloud infrastructure design to an organization. The architect will meet with the organization's executives to discuss the physical design. Why might this be the wrong time to present this design?


A. Physical designs are too high level for executives
B. Physical designs are too general for executives
C. Physical designs are too detailed for executives
D. Physical designs are too theoretical for executives

Answer: C