Wednesday, July 8, 2015

IT News Head Lines (AnandTech) 7/9/2015

AnandTech



The MSI HQ Tour: Design 101
This year marked my fifth year at Computex, starting from 2011. Of the trade shows I attend each year, it ranks number one for a variety of reasons: Taiwan is a great country to visit and experience, most of the PC component companies I deal with on a day-to-day basis are based there, and every year there tends to be a number of big launches just before, during, or immediately after the show. Each year during Computex we have coordinated with a company to obtain both a tour of its headquarters and a high-profile interview. This year we had the chance to visit MSI, based in the Zhonghe district of New Taipei City.


Read More ...




Samsung Launches New 2TB SSD 850 EVO And 850 PRO Models
Due to what Samsung cites as a surge in demand for larger capacity SSDs, the company has now launched two new models offering two terabytes of storage each. To drive the extra capacity, Samsung has also introduced a new SSD controller, the MHX. Our resident SSD expert Kristian expects the MHX to be similar in design to the MEX controller, but with additional DRAM to track the extra blocks.

The 2TB 850 EVO leverages the same 32-layer 128 Gbit TLC V-NAND that we have already seen in the smaller capacity 850 EVO products, while the 850 PRO will use a new 128 Gbit 2-bit MLC die, still at 32 layers. It should be a nice addition to the 850 PRO series, especially with the rise of 4K video and the extra storage it requires.

Samsung 2TB SSD Specifications

                    850 PRO                                      850 EVO
Controller          Samsung MHX
NAND                Samsung 128Gbit 40nm MLC V-NAND (32-layer)   Samsung 128Gbit 40nm TLC V-NAND (32-layer)
DRAM (LPDDR3)       2GB
Sequential Read     550MB/s                                      540MB/s
Sequential Write    520MB/s                                      520MB/s
4KB Random Read     100K IOPS                                    98K IOPS
4KB Random Write    90K IOPS                                     90K IOPS
Power               5mW (DevSLP) / 3.3W (read) / 3.4W (write)    5mW (DevSLP) / 3.7W (read) / 4.7W (write)
Encryption          AES-256, TCG Opal 2.0 & IEEE-1667 (eDrive supported)
Endurance           300TB                                        150TB
Warranty            10 years                                     5 years
Price               $1,000                                       $800

Samsung will still package these drives in the same 7mm 2.5” SSD enclosure, which means they will be SATA-based for now, but Samsung has said it will be bringing its 3D NAND to mSATA and M.2 form factors as well. The drives are rated for 10 years or 300 TBW (terabytes written) for the PRO, and 5 years or 150 TBW for the EVO model.

The 850 PRO retails for $1000, and the 850 EVO for $800. Neither is inexpensive by any means, and both cost far more than the roughly $75 of a 2TB spinning disk, but the prices are right around double those of the 1TB models in the lineup, so there is no extra premium for the larger capacities at this time.
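As a quick sanity check on that pricing, the per-gigabyte math works out as follows. This is a back-of-the-envelope sketch using only the article's figures; street prices will vary:

```python
# Rough $/GB comparison for the new 2TB drives versus a hard disk.
# Figures from the article: $1000 (850 PRO 2TB), $800 (850 EVO 2TB),
# and ~$75 for a 2TB spinning disk.
drives = {"850 PRO 2TB": 1000.0, "850 EVO 2TB": 800.0, "2TB HDD": 75.0}
capacity_gb = 2000  # decimal gigabytes, as drive makers count them

for name, price in drives.items():
    print(f"{name}: ${price / capacity_gb:.3f}/GB")
# 850 PRO 2TB: $0.500/GB
# 850 EVO 2TB: $0.400/GB
# 2TB HDD: $0.038/GB
```

Even at double the 1TB prices, the SSDs still carry more than a 10x per-gigabyte premium over spinning rust.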

Kristian should have a full review of the new models soon.




Read More ...




NVIDIA @ ICML 2015: CUDA 7.5, cuDNN 3, & DIGITS 2 Announced
Taking place this week in Lille, France is the 2015 International Conference on Machine Learning, or ICML. Now in its 32nd year, the annual event is one of the major international conferences focused on machine learning. A number of machine learning announcements coincide with the conference, and with NVIDIA now heavily investing in machine learning as part of its 2015 Maxwell and Tegra X1 initiatives, with a specific focus on deep neural networks, the company is at the show this year to make some announcements of its own.

All told, NVIDIA is announcing new releases for three of its major software libraries/environments: CUDA, cuDNN, and DIGITS. While NVIDIA is primarily in the business of selling hardware, the company has for some time now treated the larger GPU compute ecosystem as a whole as key to its success. Putting together useful and important libraries helps to make GPU development easier and to attract developer interest from other platforms. Today's announcements are in turn Maxwell- and FP16-centric, with NVIDIA laying the groundwork for neural networks and other half-precision compute tasks which the company believes will be important going forward. Though the company so far has only a single product with a higher performance FP16 mode – Tegra X1 – NVIDIA has more than subtly hinted that the next-generation Pascal GPUs will incorporate similar functionality, all the more reason to get the software out in advance.

CUDA 7.5


Starting things off we have CUDA 7.5, which is now available as a release candidate. The latest update to NVIDIA's GPU compute platform is a smaller release, as one would expect for a half-version update, and is primarily focused on laying the API groundwork for FP16. To that end CUDA 7.5 introduces proper support for FP16 data, and while non-Tegra GPUs still don't receive a compute performance benefit from using FP16 data, they do benefit from reduced memory pressure. For the moment, then, NVIDIA is enabling the feature so that developers can take advantage of the reduced memory bandwidth requirements and/or fit larger datasets in the same amount of GPU memory.
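The memory argument is easy to see even outside of CUDA. Here is a minimal NumPy sketch (illustrative only; CUDA 7.5 exposes FP16 through its own half-precision types, not NumPy) showing how storing the same values at half precision halves the buffer size, at the cost of some precision:

```python
import numpy as np

# The same one-million-element buffer at single and half precision.
fp32 = np.linspace(0.0, 1.0, 1_000_000, dtype=np.float32)
fp16 = fp32.astype(np.float16)

print(fp32.nbytes)  # 4,000,000 bytes
print(fp16.nbytes)  # 2,000,000 bytes: half the memory and bandwidth

# The trade-off: FP16 has ~3 decimal digits of precision and a max
# representable value of 65504, so data must fit that range.
max_err = float(np.max(np.abs(fp32 - fp16.astype(np.float32))))
print(max_err)  # small, but nonzero rounding error
```

This is exactly the "reduced memory pressure" benefit: even with no FP16 math units, a network's weights and activations take half the space and half the bus traffic.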

Meanwhile CUDA 7.5 is also introducing new instruction level profiling support. NVIDIA’s existing profiling tools (e.g. Visual Profiler) already go fairly deep, but now the company is looking to go one step further in helping developers identify specific code segments and instructions that may be holding back performance.

cuDNN 3


NVIDIA’s second software announcement of the day is the latest version of the CUDA Deep Neural Network library (cuDNN), NVIDIA’s collection of GPU-accelerated neural network functions, which is now up to version 3. Going hand-in-hand with CUDA 7.5, a big focus of cuDNN 3 is support for FP16 data formats on existing NVIDIA GPUs, in order to allow for more efficient memory and memory bandwidth utilization, and ultimately larger data sets.


Meanwhile, separate from NVIDIA’s FP16 optimizations, cuDNN 3 also includes some optimized routines for Maxwell GPUs to speed up overall performance. NVIDIA tells us that FFT convolutions and 2D convolutions have both been added as optimized functions here, and the company is touting up to a 2x increase in neural network training performance on Maxwell GPUs.
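cuDNN's kernels themselves are closed, but the idea behind FFT convolution is textbook: by the convolution theorem, a convolution becomes an element-wise multiply in the frequency domain, which is cheaper than the direct form once the filters get large. A minimal 1D NumPy sketch of the equivalence (not cuDNN code):

```python
import numpy as np

def fft_convolve(signal, kernel):
    """Linear convolution via FFT: zero-pad to full output length,
    multiply the spectra, and transform back."""
    n = len(signal) + len(kernel) - 1
    spectrum = np.fft.rfft(signal, n) * np.fft.rfft(kernel, n)
    return np.fft.irfft(spectrum, n)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
k = rng.standard_normal(16)

direct = np.convolve(x, k)    # O(N*K) direct form
via_fft = fft_convolve(x, k)  # O(N log N), wins for large kernels

print(np.allclose(direct, via_fft))  # True: same result either way
```

The same identity holds in 2D, which is why frameworks can swap convolution algorithms per layer without changing the network's output.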

DIGITS 2


Finally, built on top of CUDA and cuDNN is DIGITS, NVIDIA’s middleware for deep-learning GPU training. First introduced back in March at the 2015 GPU Technology Conference, the software is being iterated on rapidly and is now up to version 2. DIGITS, in a nutshell, is NVIDIA’s higher-level neural network training system, aimed at scientists and researchers who may not be accomplished computer programmers or neural network specialists.



NVIDIA® DIGITS™ Deep Learning GPU Training System

DIGITS 2 in turn introduces support for training neural networks over multiple GPUs, going hand-in-hand with NVIDIA’s previously announced DIGITS DevBox (which is built from 4 GTX Titan Xs). All things considered, the performance gains from using multiple GPUs are not all that spectacular – NVIDIA is touting just a 2x performance increase in going from 1 to 4 GPUs – though for performance-bound training this nonetheless helps. Looking at NVIDIA’s own data, scaling from 1 to 2 GPUs is rather good, but it is in going from 2 to 4 GPUs that the gains slow down, presumably due to a combination of bus traffic and synchronization issues across a larger number of GPUs. On that note, it does make me curious whether the Pascal GPUs and their NVLink buses will improve multi-GPU scaling at all in this scenario.
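Parallel efficiency (speedup divided by GPU count) puts those numbers into perspective. A quick sketch: the 2x-at-4-GPUs figure is NVIDIA's quoted number, while the 2-GPU speedup below is a hypothetical placeholder for the "rather good" 1-to-2 scaling:

```python
# Parallel efficiency = speedup / number_of_gpus.
# speedups[4] = 2.0 is NVIDIA's quoted figure; speedups[2] = 1.8 is
# a hypothetical stand-in for near-linear 1-to-2 GPU scaling.
speedups = {1: 1.0, 2: 1.8, 4: 2.0}

for gpus, speedup in speedups.items():
    print(f"{gpus} GPU(s): {speedup:.1f}x speedup, "
          f"{100 * speedup / gpus:.0f}% efficiency")
# 1 GPU(s): 1.0x speedup, 100% efficiency
# 2 GPU(s): 1.8x speedup, 90% efficiency
# 4 GPU(s): 2.0x speedup, 50% efficiency
```

Dropping to 50% efficiency at 4 GPUs is what makes the bus-traffic and synchronization overhead so visible, and why a faster interconnect like NVLink is the obvious thing to watch.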


In any case, the preview release of DIGITS 2 is now available from NVIDIA, though the company has not stated when a final version will be made available.


Read More ...




Samsung Adds 2 TB 850 EVO, PRO SSDs for $800, $1000
Drives are priced at a half to a third as much per GB as rivals' high-capacity drives, and lead the pack in warranty and performance to boot

Read More ...




Mozilla Promises Punctual Windows 10 Firefox Release, Teases iOS Arrival
The long delay seen with the Windows 8.x version will not be repeated; work is already well under way

Read More ...






Available Tags: MSI, Samsung, SSD, NVIDIA, CUDA, Mozilla, Windows, Firefox, iOS
