
NVIDIA to License Kepler and Future GPU IP to 3rd Parties
Earlier today NVIDIA announced that it would begin licensing its Kepler GPU architecture to 3rd parties. This is a sensible next step for NVIDIA, but an unprecedented one among the two remaining discrete PC GPU suppliers.
Note that what NVIDIA is announcing today is contrary to AMD’s semi-custom approach to SoC production. AMD is offering to build (semi-)custom silicon tailored to customer needs, while NVIDIA is taking a more ARM-like approach, offering its GPU IP for 3rd parties to integrate on their own. In other words, NVIDIA is looking to compete with ARM and Imagination Technologies rather than AMD or Qualcomm.
In addition to its GPU architecture, NVIDIA is now also open to licensing its visual computing patents to 3rd parties. The visual computing patent portfolio includes all of NVIDIA’s 5500 patents in the area, as well as CUDA.
NVIDIA views its IP licensing business as additive rather than in lieu of its current GPU and SoC businesses. Time will tell whether or not this ends up being the case, but it’s quite obvious that at NVIDIA’s current size it wouldn’t be able to go after all GPU markets on its own - enabling others to do so makes a lot of sense from that perspective.
It Doesn’t End with Kepler: Future NVIDIA GPUs to Be Licensed

I asked NVIDIA about future GPU architectures beyond Kepler, and the answer was pretty awesome: future GPU architectures will be available to licensees at the time of tape out by NVIDIA. Licensees can choose whether or not to adopt an architecture right away or wait for any potential revisions, similar to what ARM does with its cores (e.g. Tegra 4i uses a later revision of the Cortex A9 core). This move has huge implications. Theoretically a licensee could bring an NVIDIA GPU to market before NVIDIA itself, although that does seem pretty unlikely. What we could see however is a licensee introduce a GPU configuration that NVIDIA had no intentions of bringing to market.
The model makes a lot of sense and expands NVIDIA’s role in the computing world beyond its life in PCs. In the PC space, NVIDIA built discrete GPUs that system integrators (and end users) put in their machines. In the post-discrete world where SoCs rule the landscape, NVIDIA believes it can be just as relevant by doing the same. The difference here is instead of NVIDIA building cards out of its GPUs and selling them, the SoC manufacturer would be responsible for all integration. It’s the same playbook, just modified to deal with the new world around it.
Targets for Integration

NVIDIA is quick to point out that Kepler is its first mobile to high-end GPU architecture. Capable of scaling from smartphones (Logan/Tegra 5 next year) to supercomputers (Titan), Kepler is inherently very flexible and makes a lot of sense as NVIDIA’s first target for its IP licensing program.
Although mobile is an obvious fit for Kepler licensing, NVIDIA hopes its GPUs will be used in new markets as well. NVIDIA’s refrain sounds quite similar to AMD’s. Neither knows where the next big market will be, but both want to be prepared for it when the time comes. It won’t be too long before smartphones and tablets reach their own Ultrabook moments, when performance becomes good enough for the majority of the market and attention shifts elsewhere. When that happens, NVIDIA (and AMD, and others) believe that opportunities for continued growth will appear in new markets (e.g. TVs, wearables, other connected compute devices).
The Compute Connection
Although it’s clear that the greatest point of interest with today’s announcement revolves around getting NVIDIA’s GPU designs and graphics IP into new products, from a high-level perspective NVIDIA has made it clear they’re licensing their visual computing technology, and that this isn’t just a play for graphics. As part of keeping themselves open to new markets, NVIDIA has told us that they’re essentially willing to do whatever makes financial sense as far as licensing goes, with both compute and graphics on the table. So at the same time as licensing out their graphics technology, NVIDIA has also opened the door to licensing out CUDA and their other GPU compute innovations if the price is right.
This can lead to several possibilities, ultimately depending on who’s interested and what market they represent. At the most basic level, licensing an NVIDIA GPU will get the buyer CUDA – binary compatibility and all – thanks to the fact that this would be the same hardware CUDA already runs on. However in NVIDIA’s new “anything is possible” licensing system, CUDA could also be licensed out separately. Device makers who simply want to add CUDA support to their devices, either to take advantage of some of the runtime’s unique functionality or merely to enable easier porting from existing NVIDIA systems, can now license the necessary CUDA IP from NVIDIA. The GPU computing market is still very young, with a number of competing technologies, but thus far, based on actual usage, CUDA has proven to be a front runner compared to more widely supported (and open) environments such as OpenCL. So while NVIDIA is still trying to bring further users onto CUDA, they also have a CUDA user base they can leverage today.
The most obvious avenue for any potential CUDA licensing would be HPC users looking for greater integration beyond today’s CPU + GPU setups we see in systems like Titan. However NVIDIA is also pursuing this with forthcoming SoCs like Logan and further products integrating their Denver CPU, so it’s not a market that’s being ignored by NVIDIA. On the other hand more novel uses of GPU compute in the embedded space, encompassing everything from TVs to automotive to traditional appliances, are areas that have been identified as potential growth avenues for GPU computing by NVIDIA and other GPU firms in the past, not all of which NVIDIA is directly serving right now. In all of these cases licensing can focus on CUDA, or even more broadly just licensing specific NVIDIA compute technologies that would be useful to include in these products; even obscure technologies like Kepler’s low-overhead soft-ECC implementation could potentially be of value as a licensed technology.
NVIDIA Can Now Go After Apple & Samsung Business
The cynic in all of us can point to NVIDIA’s struggles with getting Tegra 4 into devices and out the door as motivation behind wanting to license its GPU IP. Beating Qualcomm has proven to be very difficult. Even Intel has had a wonderfully difficult time of making its way into the mobile space. So is that what this licensing play is all about? To an extent, perhaps.
Had Tegra 4 been out and available, I think it’s safe to say that the SoC would likely have been used in at least some previous Tegra 3 design wins. Tegra 4i will hope to do the same for smartphones. I see no reason for these businesses to stop, but I think it’s quite obvious that there’s a huge gap between where the Tegra business is today and where Qualcomm is.
By licensing its GPU IP, NVIDIA opens itself up to additional customers (and revenue) that otherwise wouldn’t have considered it. I doubt Apple would ever use an off-the-shelf Tegra SoC, but NVIDIA can now compete for Apple SoC business alongside Imagination Technologies. Should Apple decide to one day drop Intel altogether and bring all of its CPU design in house, it now has a GPU vendor it can license cores or technologies from - just like it does with ARM. The exact same goes for Samsung.
Both Apple and Samsung have histories of licensing GPU IP from Imagination. NVIDIA now has a chance of going after that business.
The same could be said at the other end of the spectrum. The mobile SoC wars we saw unfold over the past few years are about to heat up in the server market. Where integration of high performance GPU architectures makes sense in servers, NVIDIA now has an offering to those that are interested.
GPUs Today, LTE Tomorrow?

NVIDIA isn’t officially announcing plans to license its Icera modem IP, but I’m told that’s the next logical step. NVIDIA is investing handsomely in Tegra 4i and its modem architectures, but similar to its GPU business - in order to address a much larger market, it will have to consider licensing that IP.
Final Words
Although unexpected from a timing perspective (we had no hint that NVIDIA was going to drop this on us today), NVIDIA’s move to license its GPU IP is very sensible. All growth markets where compute is concerned are moving forward with high levels of integration. For NVIDIA to not only remain relevant in the broader world but also grow with it, it must have a strategy in place for markets where integration is required. Where those new markets are, and ultimately what this means for NVIDIA’s financials, is beyond the scope of our analysis - it’s simply the right (only?) move.
Read More ...
Snapdragon 800 (MSM8974) Performance Preview: Qualcomm Mobile Development Tablet Tested
We’ve written about Snapdragon 800 (MSM8974) before; for those unfamiliar, this is Qualcomm’s new flagship SoC with four Krait 400 CPUs at up to 2.3 GHz, Adreno 330 graphics, and the latest modem IP block with Category 4 LTE. Qualcomm is finally ready to show off MSM8974 performance on final silicon and board support software, and invited us and a few other publications out to San Francisco for a day of benchmarking and poking around. We looked at MSM8974 on both the familiar MSM8974 MDP/T, a development tablet used both by Qualcomm and 3rd parties to develop drivers and platform support, and the MSM8974 MDP phone, both of which have been publicly announced for some time now.
The tablet MDP is what you’d expect: an engineering platform designed for Qualcomm and other third parties to use while developing software support for features. Subjectively it’s thinner and more svelte than the APQ8064 MDP/T we saw last year, but as always OEMs will have the final control over industrial design and what features they choose to expose. The display is 1080p on the tablet and 720p on the phone, a bit low considering the resolutions handset and tablet makers are going for (at least 1080p on phones and WQXGA on tablets), so keep that in mind when looking at on-screen results from benchmarks. Read on for our full Snapdragon 800 performance preview.
Read More ...
Building a mini-ITX Haswell System with ASUS [video]
For our final installment, JJ put together a bunch of components for a mini-ITX Haswell build and took us through his build process. The motherboard itself is a Z87-I Deluxe, an upcoming mini-ITX Z87 board from ASUS. Also in the video you'll see JJ install ASUS' mini-ITX optimized GeForce GTX 670 DC Mini card. Finally, the chassis is pretty cool - it's the Lian Li PC-Q30.
Read More ...
Ask AnandTech: Tablets at Work, How Important is Backwards Compatibility?
Last week you guys did an awesome job with the discussion around the role of tablets in the workplace. There are a good number of you who have already embraced tablets for work, or who at least see the potential for the form factor at work if other hardware requirements are met. Now comes the next level, and honestly a question that I'm asked quite often when meeting with manufacturers. As far as work tablets are concerned, how important is backwards compatibility with existing x86/Windows applications?
The question obviously lends itself to a Windows 8 vs. Windows RT debate, but it's actually even bigger than that. We're really talking about Windows 8 vs. Windows RT or Android or iOS in the workplace.
While the previous question could definitely influence future design decisions, your answers here help answer more fundamental questions of what OSes to support for OEMs looking to play in the enterprise/business tablet space.
Respond in the comments below!
Read More ...
NVIDIA @ ISC 2013: CUDA 5.5 Released & More
As the 2013 International Supercomputing Conference continues this week, product and technology announcements continue to trickle out of the show. NVIDIA of course is no stranger to this show, and coming off their success with Titan last year are ever increasing their presence to try to capture a larger share of the lucrative HPC market. To that end NVIDIA is releasing several announcements this morning that we wanted to briefly cover.
The big news out of ISC 2013 for NVIDIA is that CUDA 5.5 is now out of private beta and onto its public release candidate. Though CUDA 5.5 is just a point release for CUDA, it does bring several significant changes for developers. The biggest change of course is that this is the first version of CUDA to offer ARM support, going hand in hand with the launch of the Kayla development platform and ahead of next year’s launch of NVIDIA’s Logan SoC.
CUDA on ARM is a significant point of interest for NVIDIA for a couple of reasons. On the consumer side of things NVIDIA is hoping to ultimately leverage CUDA for compute on SoC based devices, similar to what they have done in the PC space over the last half-decade. For the ISC crowd however the focus is on what this means for NVIDIA’s HPC ambitions, as an ARM based HPC environment is something that can be powered exclusively by NVIDIA processors, as opposed to today’s common scenario of pairing Tesla compute cards with x86 AMD and Intel processors. Though if nothing else, in the more immediate future this is NVIDIA ensuring they aren’t left behind by the continuing growth of ARM device sales.
Along with bringing ARM support to CUDA, CUDA 5.5 also introduces cross compilation support to the toolkit, allowing ARM binaries to be built either natively on ARM systems or much more quickly on faster x86 systems. Other changes include several different improvements in MPI and HyperQ, such as MPI workload prioritization and HyperQ gaining the ability to receive jobs from multiple MPI processes on Linux systems.
Finally, on a broader view CUDA 5.5 will also bring some small but important changes that most developers will see in one way or another. On the development side of things NVIDIA is rolling out a new guided performance analysis tool for use with their Visual Profiler tool and with the Nsight Eclipse Edition IDE, in order to help developers better identify and resolve performance bottlenecks. Meanwhile on the deployment side of things NVIDIA is finally rolling out a static compilation option, which should simplify the distribution of CUDA applications: the necessary CUDA libraries can be statically linked into an application, rather than relying solely on dynamic linking and requiring that the libraries be bundled with the application or the CUDA toolkit installed on the target computer.
Moving on, along with the CUDA 5.5 announcements NVIDIA is also using ISC to showcase some of the latest projects being developed with NVIDIA’s GPUs. NVIDIA’s major theme for ISC is neural net computing, with a pair of announcements relating to that.
On the academic front, Stanford has put together a new cluster to model neural networking for researching how the human brain learns. The 16 server cluster is capable of modeling an 11.2 billion parameter neural network, which is 6.5 times bigger than the second largest such network, a 1.7 billion parameter model put together by Google in 2012. The cluster is the basis of a separate paper being released this week for the International Conference on Machine Learning, which is simultaneously taking place this week in Atlanta.
Meanwhile on the business front, Nuance, the company behind speech recognition software such as the Dragon series, is being tapped for ISC as a neural network case study. Nuance has used neural networking techniques for years as the basis of the machine learning systems their software uses to train itself, and more recently the company has begun integrating GPUs into that work. Specifically, the company is now using NVIDIA’s GPUs to accelerate the training process, cutting down the amount of time needed to train a model from weeks to days, and in turn allowing the company to experiment with many more new models in the same period of time. The ultimate result is that the company can test and refine models more frequently, with these refined models becoming the basis of future products.
Finally, although Titan is no longer the #1 supercomputer in the world – having been bumped down to merely #2 on the latest Top500 list – word comes from NVIDIA that Titan has finally passed all of its necessary acceptance tests. As is typical for supercomputers, they are unveiled and listed before undergoing full acceptance testing, which means final acceptance may not come until months later. In the case of Titan some unexpected issues were discovered with the PCIe connectors on its motherboards, with excess gold in the connectors leading to solder issues. The issue was repaired in April, and Titan was resubmitted for an acceptance testing pass, which as of last week it has since passed and finally entered full production.
Read More ...
AMD Evolving Fast to Survive in the Server Market Jungle
There are two important trends in the server market: it is growing and it is evolving fast. It is growing because the number of client devices is exploding: only one third of the world population has access to the internet, and the number of internet users is increasing by 8 to 12% each year. Most of the processing now happens on the server side (“in the cloud”), so the server market is evolving fast - the more efficiently an enterprise can deliver IT services to all those smartphones, tablets and PCs, the higher its profit margins and chances of survival.
And that is why there is so much interest in the new star, the “micro server”. Today, AMD has laid out some ambitious plans for this part of the server market.
Read More ...
UPDATED! AMD Announces FX-9590 and FX-9370: Return of the GHz Race
Today at E3 AMD announced their latest CPUs, the FX-9590 and FX-9370. Similar to what we’re seeing with Richland vs. Trinity, AMD is incrementing the series number to 9000 while sticking with the existing Piledriver Vishera architecture. These chips are the result of tuning and binning on GlobalFoundries’ 32nm SOI process, and the clock speed jump from the existing FX-8350 is nonetheless quite impressive.
The FX-8350 had a base clock of 4.0GHz with a maximum Turbo Core clock of 4.2GHz; the FX-9590 in contrast has a maximum Turbo clock of 5GHz and the FX-9370 tops out at 4.7GHz. We’ve asked AMD for details on the base clocks for the new parts, but so far have not yet received a response; we're also missing details on TDP, cache size, etc. but those will likely be the same as the FX-8350/8320 (at least for everything but TDP).
6/13/2013 Update: We have now received the most important pieces of information from AMD regarding the new parts. The base clock on the FX-9590 will be 4.7GHz and the base clock of the FX-9370 will be 4.4GHz, so in both cases the base clock is 300MHz below the maximum Turbo Core speed. The more critical factor is also the more alarming one: the rumors of a 220W TDP have proven true. That explains why these parts will target system integrators first, and it earns the FX-9000 series the distinction of having the highest TDP of any AMD CPU to date, but it also raises some serious concerns. With proper cooling, there's little doubt that you can run a Vishera core at 5.0GHz for extended periods of time, but 220W is a massive amount of power to draw for just a CPU.
To put things in perspective, the highest TDP part ever released by AMD prior to the FX-9000 series is the 140W TDP Phenom II X4 965 BE. For Intel, the vast majority of their chips have been under 130W, but a few chips (e.g. Core 2 Extreme QX9775, Core i7-3970X, and most of the Xeon 7100 series PPGA604 parts back at the end of the NetBurst era) managed to go above and beyond and hit 150W TDPs. So we're basically looking at a 76% increase in TDP relative to the FX-8350 to get a 19% increase in maximum clock speed. It's difficult to imagine the target market for such a chip, but perhaps a few of the system integrators expressed interest in a manufacturer-overclocked CPU.
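To make those percentages concrete, here's the arithmetic as a quick sketch (the FX-8350's 125W TDP is its stock rating, a figure from AMD's spec sheet rather than from this announcement):

```python
# Relative increases for the FX-9590 vs. the FX-8350.
fx8350_tdp, fx9590_tdp = 125, 220      # watts; 125W is the FX-8350's stock TDP
fx8350_turbo, fx9590_turbo = 4.2, 5.0  # GHz, maximum Turbo Core clocks

tdp_increase = (fx9590_tdp / fx8350_tdp - 1) * 100
clock_increase = (fx9590_turbo / fx8350_turbo - 1) * 100

print(f"TDP: +{tdp_increase:.0f}%")      # TDP: +76%
print(f"Clock: +{clock_increase:.0f}%")  # Clock: +19%
```

In other words, each extra percent of clock speed costs roughly four percent more power, which is the efficiency trade-off the article is objecting to.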
For those who remember the halcyon days of the NetBurst vs. Sledgehammer Wars, the irony of AMD pimping the “first commercially available 5GHz CPU” can be a bit hard to take. Yes, all other things being equal (cache sizes, latency, pipeline depth, power use, etc., etc…), having a higher core clock will result in better performance. The stark reality is that all other things are almost never “equal”, however, which means pushing clocks to 5GHz will improve performance over the existing FX-8000 parts but clock speed alone isn’t enough. AMD continues to work on their next generation architecture, Steamroller, which will debut later this year in the Kaveri APUs as a 28nm part, but in the interim we have to make do with the existing parts.
As we covered extensively last week, Intel has just launched their latest Haswell processors, and on the desktop we’re seeing relatively small performance gains. That’s somewhat interesting as this is a “Tock” in Intel’s Tick-Tock cadence, which means a new architecture and that usually means improved performance. However, similar to the last Tock (Sandy Bridge), Haswell is more of a mobile-focused architecture, which means performance gains on the CPU are minor but power and battery life gains can be significant, especially in lighter workloads. Also similar to the “Tock” when we moved from Clarkdale to Sandy Bridge, the jump in graphics performance with the HD 5000 series parts (and even more so with the Iris and Iris Pro parts) can be quite large relative to Ivy Bridge.
So Intel has been relatively tame on the CPU performance increases this time around, and instead they’ve focused on reducing typical power use and improving graphics. Meanwhile AMD’s answer on their high-end desktop platforms is…more clock speed. We’ll have full reviews of the new parts in the future, as the new CPUs are not yet available, but given the ability of Vishera to overclock quite easily to the 4.8-5.2GHz range on air cooling (and 8GHz+ with exotic overclocking methods!), the higher Turbo Core speeds were inevitable.
We could also talk model numbers and question the need to increment from the 8000 series to the 9000 series when nothing has really changed this time around—the more sensible time to make that jump should have been when Vishera first launched, at least from the technology side of things. It would also be nice to see more of a unification of model numbers in AMD’s product stack, as we currently have FX-4000, FX-6000, FX-8000, and now FX-9000 parts all built on the Zambezi/Bulldozer and Vishera/Piledriver architectures. FX-4000 (two modules/four cores), FX-6000 (three modules/six cores), and FX-8000 (four modules/eight cores) made sense, but FX-9000 breaks that pattern. At present there are no updates being announced for the FX-4000 and FX-6000 families, but those will likely come. Will they be FX-5000 and FX-7000 parts now, or will they remain 4000/6000? If AMD were to use an Intel-style naming convention, Bulldozer was 1st Generation, Piledriver is 2nd Generation, and ahead we still have Steamroller (3rd Generation) and Excavator (4th Generation), but they’ve chosen a different route.
Whatever the name of the part, more than ever it’s important to know what you’re actually getting in terms of hardware before making a purchase—that holds true for AMD CPUs, APUs, and GPUs, but it also applies to Intel’s CPUs and NVIDIA’s GPUs, never mind the variety of ARM SoCs out there. The FX-9000 series is now AMD’s highest performance four module/eight core processor for their AM3+ platform, but it’s an incremental improvement from the FX-8000 series in the same way that the Radeon HD 8000 series is an incremental improvement on the HD 7000 GCN offerings. At least on the AMD CPU side of things we can generally go by the “higher numbers are better” idea, but that won’t always be the case.
AMD did not reveal pricing details on the new parts, and the press release says these new CPUs will “be available initially in PCs through system integrators”. They may replace the existing FX-8350 and FX-8320 eventually, but they will initially launch at a higher price depending on how AMD and their partners feel they stack up against the competition.
Read More ...
Synology DS1812+ 8-bay SMB / SOHO NAS Review
Synology recently refreshed their 8-bay SMB / SOHO NAS lineup with the DS1813+. Based on the same platform as the DS1812+ (Atom D2700), it added two extra network ports. However, due to the similarity in the underlying platform, the performance in most cases can be expected to be similar to last year's version, the DS1812+.
A number of Intel Atom D27xx-based NAS systems have been evaluated in our labs, even though we formally reviewed only one earlier this year, the LaCie 5big NAS Pro. The Thecus N4800 has made its appearance in some benchmarks presented in our SMB / SOHO NAS testbed article. We are waiting for a firmware update to complete our 5big NAS Pro review, but in the meantime we have results from our evaluation of the Synology DS1812+, which was sent to our labs last year. In our experience, Synology manages to tick all the right boxes for the perfect consumer NAS (except for the pricing factor). Does the DS1812+ carry things forward, or do we have something to complain about? Read on to find out.
Read More ...
N-trig DuoSense Pen2: Who Needs a Stylus?
With the dawn of capacitive touch displays and the iPhone, iPad, iPod Touch, etc., some might think the day of the stylus is past. N-trig has been around since 1999 working on stylus hardware, and they disagree. Just what can you do with a stylus that you can't do with capacitive touch? Well, art, legible notes, and a real signature are three things that come to mind. Other than that? Yeah, capacitive touch works pretty well, doesn't it? Read on for our thoughts on N-trig's latest stylus, the DuoSense Pen2.
Read More ...
Ask AnandTech: Tablets at Work, What are Your Experiences?
The tablet market has grown tremendously over the past few years. What started as a content consumption device for consumers has transformed into a device that has started to pull sales away from traditional notebooks. The obvious next step for tablets is towards the enterprise and business users.
As my usage models tend to be a bit unusual, when tasked with finding out how people use tablets for work my initial thought was to go to you all directly. So, how do you - or how could you - use tablets for work? What possibilities do you see for tablet use at work going forward? Respond with your thoughts in the comments; a lot of eyes will be watching this discussion and you could definitely help shape design decisions going forward.
Read More ...
Overclocking Haswell on ASUS' 8-Series Motherboards [video]
After giving us a tour of ASUS' 8-series Haswell motherboards and its updated UEFI interface, JJ took us through overclocking a Core i7-4770K using ASUS' new software and UEFI tools. We get a good look at how auto-overclocking works, as well as what settings to pay attention to when manually overclocking Haswell.
Read More ...
Samsung Announces Galaxy S4 Zoom - 16 MP, Zoom, Makes Calls
Samsung's Galaxy Camera came out almost a year ago, and it roughly mimicked the specs of an international SGS3 but included a unique camera system and body. Although the device couldn't make phone calls, it included cellular connectivity and was arguably the best of the limited number of connected cameras competing in the space. After many whispers, Samsung has announced the Galaxy S4 Zoom, an updated version of its connected camera line with a display and front face emulating the SGS4 but topped with another 16 MP camera system.
Camera Emphasized Smartphone Comparison

| | Samsung Galaxy Camera (EK-GC100) | Nikon Coolpix S800c | Nokia PureView 808 | Samsung Galaxy S4 Zoom |
|---|---|---|---|---|
| CMOS Resolution | 16.3 MP | 16.0 MP | 41 MP | 16.3 MP |
| CMOS Format | 1/2.3", 1.34µm pixels | 1/2.3", 1.34µm pixels | 1/1.2", 1.4µm pixels | 1/2.3", 1.34µm pixels |
| CMOS Size | 6.17mm x 4.55mm | 6.17mm x 4.55mm | 10.67mm x 8.00mm | 6.17mm x 4.55mm |
| Lens Details | 4.1 - 86mm (22 - 447mm 35mm equiv), F/2.8-5.9, 21x zoom + OIS | 4.5 - 45.0mm (25 - 250mm 35mm equiv), F/3.2-5.8 | 8.02mm (28mm 35mm equiv), F/2.4 | 4.3 - 43mm (24 - 240mm 35mm equiv), F/3.1-6.3, 10x zoom + OIS |
| Display | 1280 x 720 (4.8") | 854 x 480 (3.5") | 640 x 360 (4.0") | 960 x 540 (4.3") |
| SoC | Exynos 4412 (Cortex-A9MP4 at 1.4 GHz with Mali-400 MP4) | ARM Cortex A5(?) | 1.3 GHz ARM11 | 1.5 GHz Exynos 4212 |
| Storage | 8 GB + microSDXC | 1.7 GB + microSDHC | 16 GB + microSDHC | 8 GB + microSDHC |
| Video Recording | 1080p30, 480p120 | 1080p30 | 1080p30 | 1080p30 |
| OS | Android 4.1 | Android 2.3.6 | Symbian Belle | Android 4.2 |
| Connectivity | WCDMA 21.1 850/900/1900/2100, 4G, 802.11a/b/g/n with 40 MHz channels, BT 4.0, GNSS | No cellular, WiFi 802.11b/g/n(?), GPS | WCDMA 14.4 850/900/1700/1900/2100, 802.11b/g/n, BT 3.0, GPS | WCDMA 21.1 850/900/1900/2100, 4G LTE SKUs, 802.11a/b/g/n with 40 MHz channels, BT 4.0, GNSS |
Last time around Samsung made things easy by supplying the sensor size; it's easy enough, however, to verify that the S4 Zoom is using the same 1/2.3" 16 MP sensor by going off of crop factor (the 5.64 crop factor for a 1/2.3" format sensor multiplied by the 4.3 mm focal length gives us Samsung's own published 24 mm focal length in 35mm-equivalent numbers). Likewise, photos published by a few websites with access to the hardware make it easy to verify the same captured photo size of 4608 x 3456. I'm not surprised that Samsung kept the sensor the same size given the desire to get the package thinner, but I find myself wishing it had included a larger one for better indoor and low light sensitivity. Thankfully OIS (Optical Image Stabilization) is still onboard. The reduced thickness also comes with a slightly higher F/# at the widest and most telephoto points: from F/2.8 to F/3.1 wide open, and from F/5.9 to F/6.3 at telephoto. There's no way around the fact that on paper the S4 Zoom is a bit of a step down from the Galaxy Camera, but it is thinner.
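The crop factor check is easy to reproduce; here's a minimal sketch using only the table's 6.17mm x 4.55mm sensor dimensions and the 36mm x 24mm full-frame reference:

```python
import math

def crop_factor(width_mm: float, height_mm: float) -> float:
    """Crop factor relative to a 35mm full frame (36 x 24 mm) sensor:
    the ratio of the two sensor diagonals."""
    full_frame_diag = math.hypot(36.0, 24.0)  # ~43.27 mm
    return full_frame_diag / math.hypot(width_mm, height_mm)

cf = crop_factor(6.17, 4.55)   # 1/2.3" format sensor from the spec table
print(round(cf, 2))            # 5.64
print(round(4.3 * cf))         # 24 -> matches Samsung's published 24mm equiv
```

The same multiplier applied to the 43mm telephoto end lands at roughly 243mm, which Samsung rounds down to the published 240mm figure.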
Of course, the real benefit is that it's a connected camera running Android 4.2 and including GNSS, 802.11n dual band WiFi, BT 4.0, NFC, and a 1.9 MP front facing camera. The biggest change of course is that unlike the Galaxy Camera, the Galaxy S4 Zoom is capable of making voice calls directly. I could see myself sticking a SIM in the Galaxy S4 Zoom and using it as a hybrid smartphone plus point-and-shoot device; I just wish it were a step up over the Galaxy Camera on the camera side of things. For that we'll have to wait and see if a Galaxy Camera 2 appears.
Source: Samsung
Read More ...
Computex 2013: ECS launches a Gaming Range: GANK, AGGRO and KILLSTEAL
One of the more esoteric showcases at Computex was from ECS. With recent chipset and processor launches, more and more motherboard companies are jumping on the bandwagon with a range of gaming oriented motherboards (either part of the main motherboard stack or separate). ECS has sought the services of a branding agency and developed their Z87 range under the heading of GANK.
The word GANK, a term I had never come across before, comes from the realm of MMORPGs, where it means ‘gang kill’. Under this range ECS will launch a few motherboards building on the Golden and Extreme ranges of the last generation.
The top board of the range will be the Z87H4-A3X Extreme, which offers an x8/x4/x4 + x4 PCIe layout for up to two-way SLI and four-way CrossfireX. Rather oddly it does not have an onboard VGA power connector, suggesting that ECS are attempting to draw 300W through the main 24-pin ATX power connector.
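For context on that 300W figure: the PCIe CEM specification allows a full-size x16 slot to draw up to 75W from the slot connector itself, so four GPU-bearing slots with no supplemental VGA power connector would, in the worst case, route all of their slot power through the 24-pin. A trivial sketch of the arithmetic:

```python
# PCIe CEM spec: an x16-size slot may supply up to 75W from the slot
# connector alone; most multi-GPU boards add a supplemental connector
# so this load doesn't all flow through the 24-pin ATX plug.
SLOT_POWER_W = 75
GPU_SLOTS = 4  # the x8/x4/x4 + x4 layout described above

print(SLOT_POWER_W * GPU_SLOTS)  # 300 watts, worst case, via the 24-pin
```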
It is interesting to note that the Machine and Domination motherboards (for Extreme and Golden respectively) both contain Thunderbolt controllers.
The other ranges are for different segments – AGGRO for AMD and KILLSTEAL for the next Intel enthusiast range (Ivy Bridge-E) launched later this year.
For the main range of motherboards, ECS is styling them in clear, broad colors and listing them under the Essentials, Deluxe or Pro branding.
Also on the ECS stand we had the entrants for ECS’ ‘MODMEN’ competition, encouraging case modders from around the world to compete for a cash prize. They were certainly impressive!
Read More ...
Intel SSD DC S3500 Review (480GB): Part 1
We always knew that Intel would build a standard MLC version of its flagship S3700 enterprise SSD, and today we have that drive: the Intel SSD DC S3500.
Read More ...
Gigabyte at the Taipei 101
Part of the Gigabyte show over recent years as one of the main sponsors of Computex is their VIP suite on the 36th floor of the Taipei 101, reserved for media and special guests, with the main Gigabyte booth at one of the show halls nearer to the ground. My first meeting this week was at the Gigabyte VIP Suite, looking at most of the new hardware and talking to the important people who make the decisions.
Read More ...
ASUS 8-Series Haswell Motherboard Video Series
For the past week we've been publishing a series of videos recorded with ASUS about its new lineup of 8-series Haswell motherboards. With two more videos to come, I wanted a single location for all four once they post. This is that location.
Read More ...
Seagate Introduces NAS HDD: WD Red Gets a Competitor
Consumers looking to fill their SOHO / consumer NAS units with hard drives haven't had too many choices. Western Digital recognized early on that the dwindling HDD sales in the PC arena had to be made up for in the fast-growing NAS segment. Towards this, it introduced the WD Red series (in 1TB, 2TB and 3TB capacities) last July. Today, Seagate is responding with its aptly named NAS HDD lineup. Just like the WD Red, these HDDs are targeted at 1- to 5-bay NAS units. WD calls its firmware secret sauce NASWare, while Seagate's is NASWorks. NASWorks supports customized error recovery controls (TLER, in other words), power management and vibration tolerance.
TLER helps to ensure that drives don't get dropped from the NAS and send the array into a rebuild phase. Seagate also claims that the firmware has an optimal balance for sequential and random performance.
Seagate does have a lead over WD in the capacity department. While the WD Red currently tops out at 3TB, Seagate's NAS HDD comes in 2 TB, 3 TB and 4 TB flavors. Seagate hasn't provided any information on the number of platters or spindle speed. Power consumption numbers are available, though. Average operating power is 4.3W for the 2TB model and 4.8W for the 3 TB and 4 TB ones.
Pricing is set at $126, $168 and $229 for the 2TB, 3TB and 4TB models respectively.
Update: Seagate has released an extensive product manual here. The 3TB and 4TB models have four platters each, while the 2TB model has two. The drives have a 3-year warranty.
Read More ...
Hands On with the Final NVIDIA Shield Hardware, Update: Now with Video
It seems like forever since CES 2013 when we first laid eyes on and played with NVIDIA’s Project Shield. Time flies, and since then Shield has dropped the Project and become just NVIDIA Shield. It’s not every day that we get to see a product go from an early prototype with its own set of issues to final hardware ready to go into manufacturing in large numbers, but with Shield we’ve been given that very opportunity. Both Anand and I got a chance to take a look at NVIDIA’s final Shield hardware.
Shield is of course NVIDIA’s reference Tegra 4 tablet turned part gaming controller slash PC streamer, part handheld gaming console running Android. Inside is the Tegra 4 SoC, with four ARM Cortex A15s running at up to 1.9 GHz and a 72-core GPU, built on TSMC’s 28nm HPL process. It’s topped by a 5-inch 720p 294 PPI display and comes with 16 GB of internal storage, 2 GB of DDR3L, 2x2:2 802.11n dual band WiFi, and a microSD card slot. Atop all of that runs Android 4.2.1, with 4.2.2 coming. Shield will essentially always run the latest stable version of NVIDIA’s Tegra 4 BSP (Board Support Package) software, with updates coming directly from NVIDIA. The specs really don’t tell the whole story though (and they haven’t changed); the key is in the subjective feel and ergonomics of Shield as a controller and its ability to be a standalone gaming console.
To say that Shield has come a long way is to put it lightly. The early hardware was very prototype-y, with a D-Pad that was mushy, triggers that didn’t feel right, analog sticks that weren’t tuned yet, and buttons that didn’t feel communicative enough. Even at Google I/O, NVIDIA was showing off Shield with hardware that wasn’t quite final yet, with a mushy D-Pad and analog sticks that still didn’t feel quite right to me.
The final Shield is much, much better.
The all-important D-Pad is much more communicative and clicky. I’m not much of a platformer player, however, so I’m not sure it will appease everyone. I’m also not sure whether it’s a hat switch, but I had no problem knowing whether I was pushing up, down, left, or right. The triggers on the back likewise have much better resistance, and both bumpers have great crisp breaks. The remainder of the buttons are likewise clicky and responsive.
Talking about a controller for me isn’t so much finding things that are well done as things that fade into the experience and can be taken for granted, and in my hour or so with Shield I can’t think of anything that would frustrate me. That said it’s hard to really know where a controller or interface is going to fatigue you until you’ve used it for a few hours.
That brings me to the weight and mass question. NVIDIA moved around the batteries inside Shield, but the overall balance still feels good. What’s really different about holding Shield versus any of the other controllers (PS3, Xbox 360 Wired or Wireless) is how I can rest the whole console in basically both palms. The topology of the underside essentially rests on a shelf formed by your fingers. It’s hard to describe, but I’m reminded of my favorite Xbox 360 controller from SCUF Gaming, whose underside buttons make you hold your hands that way.
The final Shield hardware also has a significantly beefier display hinge, which makes the whole thing feel more snug and solid. There’s also the new metal Shield logo at the bottom between the front air intake (yes, Shield’s Tegra 4 remains actively cooled, with a fan that kicks in after it crosses a certain temperature). If I had to describe the final Shield hardware, it honestly would follow that language: tougher, beefier, and less delicate.
The buttons now navigate through Android much more effectively as well. The left and right bumpers take you through pages in the launcher, reminiscent of navigation on the Xbox 360. The left analog stick works like a virtual mouse and pops up a cursor, and the D-Pad accordingly works like you’d expect. I found myself using those controls more than the touchscreen, though there are inevitably actions (both in games and in Android) which require you to take a hand off of balancing Shield and interact with the display.
I still find myself wishing that the LCD display were bigger. Not because 5 inches diagonal is too small, or 720p too low a resolution, but because the large black bezel around it still makes the whole thing look rather awkward. I’m surprised NVIDIA couldn’t cut a deal with LG for some of its bigger 1080p panels (like the one from the Optimus G Pro), but I suspect there’s no easy way to get something better without compromising the price point. The form factor of Shield also necessitates a landscape layout, and landscape layouts have been something of a rarity on Android since the departure of handsets with QWERTY hardware keyboards. There are more than a few apps which only work in portrait, or have some view which works only in portrait, but hopefully NVIDIA and Shield will persuade some of this to change back.
The remainder of the equation for Shield’s viability as a console remains a software one, and here NVIDIA has to rely on the Android platform’s ability to deliver games customers want. Shield is effectively a gaming console running Android, something everyone talks about but nobody has really executed or delivered on entirely (sorry Ouya) – at least yet. Shield will ship with a few gaming-related preloads: Sonic 4 Episode II THD and Expendable: Rearmed for games, plus Twitch TV and Hulu+ apps optimized for Shield. Of course there’s Google Plus and NVIDIA TegraZone as well. Beyond that, NVIDIA is relying on Android developers to make games which work and play well on Shield’s rather unique form factor. At launch, NVIDIA claims it will have 30 optimized titles available which work well with the controller and are optimized for landscape displays. Of course, any game on Google Play with controller support will work on Shield.
The PC streaming aspect remains a second way to use Shield, and it’s a big selling point for anyone with a beefy desktop GPU, even if Shield becomes little more than a video decode sink and controller. In this setup, your NVIDIA-based PC handles all of the 3D rendering before encoding the frame buffer as a video stream and sending it over WiFi to Shield. Presently, this personal GPU cloud only works in a 1-to-1 (one desktop to one Shield) ratio, and it's only designed for local use.
Latency is impressively low and there’s minimal to no hitching. I played Borderlands on Shield connected to a Falcon Northwest box with GeForce Titan inside and found it more than playable. The obvious end goal however is to use some GPUs virtualized in the cloud with GRID and stream games to Shield, but it’s not quite there yet.
I find myself wishing that NVIDIA could launch Shield as a Nexus Experience device of some kind, a flagship platform for the new Google Play gaming services APIs, but I’m not sure that’s in the cards. NVIDIA is steadfast with its “late June 2013” ship date, and the price remains $349 for NVIDIA’s foray into handheld gaming with Tegra 4.
Read More ...
Hands On with a Tegra 4i Phone
When we stopped by NVIDIA to play with Shield, they had another surprise in store for us: a Tegra 4i-based phone currently being shopped around by an unnamed ODM. The unusual “Brand” markings would of course be replaced with the brand of whichever operator or OEM ends up carrying and supporting it. NVIDIA has talked about and shown its Phoenix reference design with Tegra 4i inside; this unnamed “Brand” phone includes the same platform but a different PCB, with a layout optimized for cost, ease of manufacture, and reduced PCB area. NVIDIA was showing this very phone making CS voice calls on Taiwanese cellular networks just recently at Computex, and hooked up to a base station emulator doing Category 4 LTE (150 Mbps downlink) with the same 4i silicon we saw earlier.
We got to play around with the brandphone (as I’m calling it) for a while. It’s impressively thin, with a z-height of 7.9 mm, width of 72 mm, and height of 138 mm. It sports a 4.8-inch 720p display and includes a 13 MP camera, 1 GB of LPDDR2 RAM, and options for 8/16/32 GB of storage. There are of course LTE and HSPA+ band options for North America, Europe, and other appropriate regions; though NVIDIA wouldn’t share exact band combinations, it’s not too hard to make estimates given the transceiver details shared in earlier 4i disclosures.
NVIDIA claims this phone (and other 4i based designs) will be out in the Q1 2014 timeframe, with appearances on some operators earlier than that. Of course Phoenix has already been shown off working on AT&T’s network, reflecting its ongoing certification process for that operator. Pricing for the brandphone I’m told will be between $300–400 unsubsidized, though there will also be Tegra 4i-based phones priced as low as $200 unsubsidized.
Read More ...
2013 MacBook Air: PCIe SSD and Haswell ULT Inside
This morning Apple updated its MacBook Air to Intel's Haswell ULT silicon. The chassis itself didn't get any updates, nor did the displays. Both the 11 and 13-inch models retain their non-Retina 1366 x 768 and 1440 x 900 displays. Battery capacities haven't changed either; they're at 35Wh and 50Wh for the 11 and 13-inch models, respectively. The big changes are on the CPU, NAND and DRAM fronts.
Read More ...
Apple Announces new AirPort Extreme and Time Capsule with 802.11ac WiFi
During the opening WWDC 2013 keynote, Apple announced a refresh of its AirPort Extreme and Time Capsule with support for 802.11ac. The two include 3x3:3 802.11ac with support for a PHY rate of up to 1300 Mbps, and of course simultaneous 3x3:3 802.11n on 2.4 GHz (802.11ac applies to 5 GHz only). From the outside, the new AirPort Extreme and Time Capsule look like taller versions of the AirPort Express released in 2012. The reason is to accommodate the six antennas inside, three for 2.4 GHz and three for 5 GHz, for optimal orthogonality with 802.11ac's new beamforming.
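For the curious, that 1300 Mbps figure falls straight out of the 802.11ac PHY parameters for an 80 MHz channel at the top modulation rate. A quick sketch of the arithmetic:

```python
# 802.11ac VHT80 at MCS 9: 256-QAM with rate-5/6 coding, short guard interval.
data_subcarriers = 234   # usable data subcarriers in an 80 MHz channel
bits_per_symbol = 8      # 256-QAM carries 8 bits per subcarrier
coding_rate = 5 / 6
symbol_time_us = 3.6     # OFDM symbol duration with the short guard interval
spatial_streams = 3      # the 3x3:3 configuration in the new AirPort Extreme

per_stream_mbps = data_subcarriers * bits_per_symbol * coding_rate / symbol_time_us
total_mbps = per_stream_mbps * spatial_streams
print(round(total_mbps))  # 1300
```

Each spatial stream contributes about 433 Mbps; three streams land exactly on the advertised 1300 Mbps PHY rate. Actual throughput will of course be well below that once MAC overhead is accounted for.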
It's unclear at this point what chipset is inside the new hardware, but from the feature support and I/O it's pretty safe to guess Broadcom. On the back are a USB 2.0 port for printers or attached storage, three gigabit Ethernet LAN ports, gigabit WAN, and power. There's no optical TOSLINK or analog audio out on the back of the new hardware; that only gets included on the AirPort Express. I searched around Moscone for the new hardware but was told it wasn't being shown off. Availability in Apple stores begins June 12, at $199 for the AirPort Extreme, $299 for a 2TB Time Capsule, and $399 for a 3TB Time Capsule.
I'm curious whether the new AirPort Extreme and Time Capsule are the same hardware inside, with a vacant SATA slot lurking in the AirPort Extreme.
At the time of the announcement, Apple also noted the inclusion of 802.11ac in the new MacBook Air and Mac Pro.
Source: Apple
Read More ...
Up Close with the New Mac Pro
In its keynote this morning, Apple teased its next-generation Mac Pro, due out later this year. Based on Ivy Bridge E, the new system will ship with two AMD FirePro GPUs with up to 4096 SPs and capable of delivering 7 TFLOPS of peak FP performance.
We got a close look at the chassis, which is 1/8 the size of the current Mac Pro. You lose any hope for internal expansion, but Apple outfitted the machine with three Falcon Ridge Thunderbolt 2 controllers to enable expansion via external storage and external Thunderbolt 2 expansion chassis options. Apple won't make any of its own Thunderbolt 2 expansion chassis, but you can expect that others will fill that void. With 20Gbps up/down on Thunderbolt 2, you should have enough bandwidth for any PCIe expansion.
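To put that 20Gbps figure in perspective, here is a back-of-the-envelope comparison; the reference numbers for SATA and PCIe are my own and ignore protocol overhead:

```python
# Thunderbolt 2 bonds two 10 Gbps channels into a single 20 Gbps link.
tb2_gbps = 2 * 10
tb2_gbytes_per_s = tb2_gbps / 8      # 2.5 GB/s of raw link bandwidth

# Rough comparison points (raw signaling rates, no overhead accounted for):
sata3_gbytes_per_s = 6 / 8           # one SATA III device tops out near 0.75 GB/s
pcie2_x4_gbytes_per_s = 4 * 0.5      # PCIe 2.0 x4 delivers about 2 GB/s

print(tb2_gbytes_per_s)              # 2.5
```

In other words, a single Thunderbolt 2 port carries roughly the bandwidth of a PCIe 2.0 x4 slot, which is why external expansion chassis are a plausible substitute for the internal slots the old Mac Pro gave up.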
Internally there are four DDR3 memory slots, as well as what looks like a proprietary PCIe SSD connector (I don't think it's M.2, unfortunately). Both GPUs are technically removable, but at least one is mounted on the same card as the PCIe SSD. Apple is putting every single PCIe lane available to use on the new Mac Pro.
Read More ...
WWDC 2013 Keynote Live Blog
Brian and I are live at Moscone West in San Francisco for Apple's WWDC 2013 Keynote. The keynote starts at 10AM PT/1PM ET. Check back here for our Live Blog!
Read More ...
Computex 2013: ASRock's M8 mini-ITX Gaming PC
After thoroughly enjoying Falcon NW's Tiki system, I was pleased to see ASRock showcase a mini-ITX tower/desktop of its own at Computex: the M8. Inside is an ASRock mini-ITX 8-series motherboard and Haswell CPU. ASRock will sell the chassis as a barebones PC including motherboard and PSU. The chassis itself was designed in cooperation with BMW design (hence the name M8, M for BMW M Series and 8 for 8-series chipset).
Large discrete GPUs are supported via a riser card on the motherboard (ASRock claims the M8 will accept a Titan). To install a discrete GPU you actually have to remove the top of the tower, giving you access to the PCIe riser card inside.
Around the front of the M8 chassis is an iDrive-like knob that functions as a power button as well as a quick way to cycle through all of the case's menu features. The side panel is screwless and is held on by magnets.
ASRock expects the M8 barebones system to etail for $499.
Read More ...
Computex 2013: ASUS Takes on Lenovo with Pro Line of Business Notebooks
ASUS has Lenovo square in its sights with its ASUS Pro brand of business notebooks. Based on Ivy Bridge silicon (upgrade cycles are longer in business; Haswell will come later), the ASUS Pro lineup is clearly designed to mimic the no-frills design of Lenovo's ThinkPad line. Although not new to Computex, this is the first time we've talked about ASUS Pro on the site.
Although you can't really tell from the photos here, the ASUS Pro systems actually look and feel great in person. I don't know if they're up to snuff with the ThinkPads for long term use, but given how far ASUS has come as a consumer manufacturer I suspect pursuing the business market with that same relentless focus will only yield good results.
The notebooks will ship with either Windows 8 Pro or 7 Pro, a clear nod to those businesses who prefer sticking with the previous generation OS. The BU400A model I checked out at Computex featured a 14-inch display, with options for 1366 x 768 or 1600 x 900 panels.
The BU400A is complete with up to a 256GB SSD, NVIDIA NVS 5200M graphics and a 53Wh battery.
Read More ...
Patriot Drives Further into Mobile with FUEL+ External Batteries
With the DRAM industry no longer as interesting as it once was (although I'd argue that with Haswell, high frequency DRAM is exciting once more - if only Intel would do a GT3 desktop SKU), Patriot has shifted its sights to building accessories for mobile. Its first attempt was the Gauntlet external wireless HDD, but at Computex this year we saw Patriot's expansion into mobile with its FUEL+ line of external batteries.
FUEL+ is available in three different form factors/capacities. The 1500 mAh version features an integrated Lightning connector for use on the iPhone 5, iPad mini and iPad 4.
The 2200 and 3000 mAh versions feature a single 1A USB port.
The 5200, 6000, 7800 and 9000 mAh versions come with two USB ports rated for 1A and 2.5A current delivery; both ports can be used simultaneously.
FUEL+ implements the USB BC 1.2 spec, and will be available later this month.
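As a rough illustration of what those capacities buy you, here is a quick estimate; the phone battery size and the conversion efficiency are my own assumptions, not Patriot's figures:

```python
# Estimate how many full phone charges an external battery pack delivers.
# Voltage conversion losses mean you never recover the full rated capacity;
# ~80% is a common rule-of-thumb efficiency, assumed here.
def full_charges(pack_mah, device_mah, efficiency=0.8):
    """Approximate number of complete device charges from a pack."""
    return pack_mah * efficiency / device_mah

# e.g. the 9000 mAh FUEL+ against a hypothetical 2600 mAh phone battery:
print(round(full_charges(9000, 2600), 1))  # ~2.8 charges
```

By the same estimate, the little 1500 mAh Lightning version is a partial top-up rather than a full recharge for most current phones.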
Read More ...
NASA Introduces Asteroid Grand Challenge to Protect Earth
NASA also issued a request for information (RFI) for ideas on locating, redirecting, and exploring asteroids
Read More ...
Lenovo Introduces $900, 15" ThinkPad S531 Ultrabook
It will be released in the UK first
Read More ...
Quick Note: Verizon May Become Canada's Fourth Major Carrier
Canada has been searching for a fourth to compete with its top tier carriers for greater consumer choice and competitive prices
Read More ...
Just How Powerful is the Xbox One? Microsoft is Confused
Meanwhile Xbox 360 continues to sell strong
Read More ...
Tesla to Show Off EV Battery Swap Tech This Week
Tesla will demo its battery swap technology for the Model S
Read More ...
Microsoft Offering Schools/Colleges $199 Surface RT Tablets
It's a limited offer that runs through the end of August
Read More ...
Sprint Chills to DISH, Sues Over Clearwire Offer
DISH's effort to gain leverage on Sprint in acquisition bid may have backfired
Read More ...
AMD: 28 nm Server Steamroller APUs, ARM CPUs Coming Next Year
2014 looks to be a big year for CPU maker
Read More ...
Source: Don't Worry, NSA Spies on "99 Percent" of Americans' Locations, Call Records
WSJ report cites sources close to agency saying holes in interception are filled by data grabs at a lower level
Read More ...
Eddy Cue Blames Loyalty to Steve Jobs For Deal That Raised eBook Prices
Higher prices? Customers might not have noticed, says a top Apple exec
Read More ...
Bosch Selling $3,000 Wireless EV Charging System
It's only for the Chevrolet Volt and Nissan Leaf
Read More ...
BMI Files Lawsuit Against Pandora Over Royalty Fees, Radio Station "Stunt"
BMI thinks Pandora is trying to be crafty with its recent radio station purchase in South Dakota
Read More ...
iPhone Finally Receives Office Mobile App -- For Office 365 Subscribers Only
It's also only for the iPhone, not iPad
Read More ...
Nintendo Game Designer Says Its Products Should be Seen as Toys -- Kept for Years to Come
Miyamoto wants gamers to have access to their Nintendo titles for a long time
Read More ...