
EVGA expands the SuperNOVA G2 PSU series
As users become more and more aware of how PSUs operate and what the real energy requirements of their systems are, sales of high-wattage units are declining relative to mid-range units. Many manufacturers have realized this and have begun marketing high-performance products with reasonable power output and pricing instead of focusing their efforts on high-output units. In that light, EVGA has expanded its very popular G2 PSU series downwards, adding 550W and 650W models to it.
EVGA's G2 series is synonymous with an excellent balance between cost, quality and performance. We have seen its capabilities in our review of the 850W version; after all, there is good reason why the Super Flower Leadex platform is so popular. The new 550W and 650W models are physically smaller but share the same features, so it is very likely that they are based on a Super Flower platform as well.
According to EVGA, the main features of the new 550 G2 and 650 G2 PSUs are:
- 80 PLUS Gold certified, with 90% (115VAC) / 92% (220VAC~240VAC) efficiency or higher under typical loads
- Highest quality Japanese brand capacitors ensure long-term reliability
- Fully Modular to reduce clutter and improve airflow
- NVIDIA SLI & AMD Crossfire Ready
- Heavy-duty protections, including OVP (Over Voltage Protection), UVP (Under Voltage Protection), OCP (Over Current Protection), OPP (Over Power Protection), and SCP (Short Circuit Protection)
- Ultra Quiet Fan with ECO Intelligent Thermal Control Fan system (Zero Fan Noise < 45°C)
- Unbeatable 7 Year Warranty and unparalleled EVGA Customer Support.
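As a rough illustration of what Gold-level efficiency means at the wall, the feature list above can be turned into input-power numbers. This is only a sketch: the load points and efficiency values below are the standard 80 PLUS Gold targets at 115 VAC, not EVGA-measured figures for these specific units.

```python
# Estimate AC wall draw for a PSU at a given DC load and efficiency.
# Efficiency points are the generic 80 PLUS Gold targets at 115 VAC:
# 87% at 20% load, 90% at 50% load, 87% at 100% load.
GOLD_115V = {0.20: 0.87, 0.50: 0.90, 1.00: 0.87}

def wall_draw(rated_watts, load_fraction, efficiency):
    """AC input power (W) needed to deliver load_fraction of rated output."""
    dc_load = rated_watts * load_fraction
    return dc_load / efficiency

for frac, eff in sorted(GOLD_115V.items()):
    print(f"550W unit at {frac:.0%} load: "
          f"{wall_draw(550, frac, eff):.0f} W from the wall")
```

At half load, for example, a 550W unit delivering 275W DC pulls roughly 306W from the wall, with the ~31W difference dissipated as heat inside the PSU.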
The new G2 series units are available as of the 12th of June.
NVIDIA Acquires Game Porting Group & Tech From Transgaming
While NVIDIA’s core businesses and gaming have been inseparable since the start, it’s only relatively recently that NVIDIA has become heavily involved in game creation itself, and not just in supplying the hardware that games are played on. The launch of the company’s Tegra ARM SoCs, their SHIELD product lineup, and the overall poor state of the Android gaming market have led to the company investing rather significantly in bringing higher quality games over to SHIELD and Android devices. This has culminated in NVIDIA paying for the Android ports of a number of games, most famously Valve’s Half-Life 2 and Portal.
Meanwhile, with the launch of the SHIELD Android TV, NVIDIA is essentially doubling down on Android gaming as part of their efforts to become the premier Android TV set top box. And now as part of those efforts, the company has announced that they are acquiring the Graphics & Portability Group (GPG) from game tool developer Transgaming.
Transgaming is best known for their work developing Cider, a WINE-derived Windows compatibility layer used to quickly port Windows games over to OS X. With the rise of Apple’s fortunes and the move to x86, Transgaming has been responsible for either directly porting games or supplying Cider to developers to bring a number of Windows games over to OS X. However, in a blink-and-you’ll-miss-it moment back in March of this year, the company announced that they were also going to get into using their technology and expertise to port games over to other architectures, partnering with NVIDIA to bring Metal Gear Rising: Revengeance to the SHIELD Android TV.
Now, just three months later, NVIDIA is acquiring the GPG outright from Transgaming. This acquisition will see the group open a new office in Toronto, while structurally they are folded into the NVIDIA GameWorks division. And although NVIDIA doesn't state precisely what they intend to do with the group and its technology beyond the fact that the “acquisition will enrich our GameWorks effort,” it’s a safe bet that NVIDIA intends to do more game ports for their SHIELD devices. Given their existing (if short) relationship, the acquisition is not too surprising. However, it is a bit interesting, since the bulk of the group’s experience is with porting games among different x86 OSes, not porting games to new architectures entirely.
As for Transgaming, having sold the GPG to NVIDIA, the company has retained their SwiftShader (software 3D rendering) technology and their GameTree TV business. Transgaming has indicated that they are going to focus on providing apps for the Smart TV market, which they see as a greater growth opportunity than porting games.
Finally, while this acquisition will undoubtedly be a big deal for NVIDIA’s efforts to bring more major games to SHIELD, perhaps the more profound ramifications of this deal will be what it means for Mac gaming. Though NVIDIA doesn’t definitively state what they will be doing with Cider, the fact that they have their own platform to worry about certainly gives pause for thought. There are a large number of games that have received native Mac ports over the years, but Cider has still been used in everything from Metal Gear Solid to EVE Online. If Cider becomes unavailable to developers, then this may cut down on the number of Windows games that get ported to OS X, especially those games where marginal sales may make a native port impractical. In either case with this acquisition NVIDIA seems to have co-opted a lot of the technology and relationships behind Mac game porting, which should be a boon for their SHIELD platform.
LIFX White 800 Smart Bulb Review
The Internet of Things (IoT) revolution has sparked an increased interest in home automation. Lighting is one of the major home automation aspects. LIFX is one of the popular crowdfunded companies in this space to have come out with a successful product. The success of their multi-colored LED bulbs brought venture capital funding, allowing them to introduce a new product in their lineup - the White 800. In this review, we take a look at the White 800 platform and our usage experience.
JMicron SSD Controller Roadmap: JMF680 SATA 6Gbps & JMF815 PCIe Controllers Next Year
JMicron is getting ready to ship its new JMF670H controller to its customers, and we also have reference design samples in for testing, but in its suite at Computex JMicron shed light on its plans for future controllers. We stopped by JMicron last year as well, and the plans have since changed a bit.
JMicron is already working on the successor to the JMF670H, which will simply be called the JMF680. It's still a SATA 6Gbps design, but it will bring support for TLC NAND thanks to what JMicron calls 'advanced ECC'. JMicron is confident that its ECC implementation will be competitive against its competitors' LDPC engines. Ultimately I believe that LDPC is more of a marketing term at this point, because everyone's ECC algorithms and implementations are slightly different anyway, but the market is associating strong ECC and TLC enablement with LDPC.
Another new feature in the JMF680 is increased capacity support, which will go up to 2TB. That is thanks to the updated (and larger) DRAM controller, which can now support up to 2GB, as modern drives typically need about 1MB of DRAM cache per 1GB of NAND. The four NAND channels will also get an upgrade to the Toggle 3.0 and ONFi 4.0 standards to support upcoming NAND dies with faster interfaces. The JMF680 also supports Write Booster, which is JMicron's SLC caching feature that debuts in the JMF670H (more on that in our upcoming JMF670H review).
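The 1MB-of-DRAM-per-1GB-of-NAND rule of thumb is easy to sanity-check against the quoted numbers. A quick sketch (purely illustrative; real firmware mapping-table overheads vary by design):

```python
# Rule of thumb from the article: an SSD's NAND mapping table needs
# roughly 1 MB of DRAM cache per 1 GB of NAND, so the DRAM controller's
# ceiling effectively bounds the maximum drive capacity.
def dram_needed_gb(nand_capacity_gb):
    """DRAM (GB) needed at ~1 MB per 1 GB of NAND."""
    return nand_capacity_gb / 1024

# The JMF680's 2 GB DRAM ceiling lines up with its 2 TB capacity limit:
print(dram_needed_gb(2048))  # 2 TB of NAND -> 2.0 GB of DRAM
print(dram_needed_gb(1024))  # 1 TB of NAND -> 1.0 GB of DRAM
```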
On the PCIe side JMicron has canceled the JMF810 and JMF811 controllers and will now be focusing solely on the JMF815. JMicron made the decision to concentrate on the value segment, and thus the JMF815 is a PCIe 3.0 x2 design with four NAND channels (no NVMe, unfortunately). A four-lane design would have required moving to a 28nm process node, which would have increased the cost substantially, and the packaging would have had to move away from BGA to FCBGA (used by e.g. Phison and SandForce in their upcoming PCIe controllers), which would further increase the cost. I think it's a good play from JMicron to focus on a segment that isn't as populated: right now everyone is focusing solely on performance with PCIe, but ultimately cost and power consumption will be major factors in widespread adoption, and JMicron should have an advantage there if the JMF815 is executed well.
First engineering samples of the JMF680 and JMF815 are expected to be ready in Q4'15 with first retail products entering the market in early 2016.
One of the trends I saw at Computex was the move towards DRAM-less SSD controllers. The JMF608 has been relatively popular in China given its ultra-low cost, and its successor, the JMF60F, will be available within the next few months. It features an improved ECC engine and larger capacity support, as well as new, cheaper QFN packaging. Following this trend, I wouldn't be surprised if JMicron also has plans for DRAM-less versions of the JMF680 and JMF815.
All in all, JMicron has a pretty solid roadmap for 2016. It's not aiming to be the performance leader, but to offer cost efficient designs for the value segment. We will have to wait and see how JMicron executes its PCIe controller, but in the meantime stay tuned for our JMF670H review that will be up in the coming weeks!
Oculus Rift Controllers, VR Games, And Software Features Announced
On the eve of E3, Oculus held a livestream to announce some more details of the upcoming Oculus Rift Virtual Reality headset. Just about a month ago, they announced that they were targeting a Q1 2016 release, and with that time fast approaching, they have given some more details on the unit itself, as well as what kind of experiences you can expect with it. Oculus has re-affirmed the Q1'16 launch date, and now we finally know the specs for the retail consumer unit.
One of the key points they brought up was that the unit itself needs to be comfortable, and part of that comfort is weight. January seems like a long time ago when I got to try out the Crescent Bay version of the Rift, but at the time I was impressed with how it felt, and I don’t recall the weight at all which I guess is the point. The final, consumer version of the Rift in turn is close to the Crescent Bay version, with further enhancements for both the electronics and the overall fit itself to bring down the weight and make it more comfortable.
Audio is also a big part of the experience, and the included headphones on Crescent Bay were quite good. For the consumer version Oculus is going in a similar direction, but today they have also confirmed that you will be able to wear your own headphones as well if you prefer that. The directional audio is a key piece to the immersion and the Oculus team has done a great job with that aspect.
Another part, though, is the displays. When we met with Oculus’s CEO Brendan Iribe at CES, one of the interesting things he told us was that they have found that interleaving a black frame between each video frame can prevent ghosting. In order to do this, though, the refresh rate needs to be pretty high, with the unit we tested running at 90 Hz. Today they announced a tiny bit about the hardware, and the Oculus Rift will ship with two OLED panels designed for low persistence. Oculus has previously commented that they're running at a combined 2160x1200, and while they don't list the individual panel size, 1080x1200 per eye is a safe bet. The OLED panels sit behind optical lenses, which help the user focus on a screen so close to their eyes without eye strain, and here the inter-pupillary distance is important: there will be an adjustment dial that you can tweak to make the Rift work best for you.
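A little arithmetic ties these display numbers together. Note that the per-eye resolution below is our inference from the combined figure, not an Oculus-published spec:

```python
# Display arithmetic for the Rift specs quoted above.
combined_w, combined_h = 2160, 1200
per_eye = (combined_w // 2, combined_h)   # two panels side by side
print(per_eye)                             # (1080, 1200)

refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz        # render time available per frame
print(round(frame_budget_ms, 1))           # ~11.1 ms per frame
```

That ~11.1 ms budget is why VR pushes GPU requirements so hard: the renderer must finish two eye views in roughly two-thirds of the 16.7 ms a conventional 60 Hz game gets for one.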
Tracking of your head movement is done with the help of an IR LED constellation tracking system, unlike the Hololens which does all of the tracking itself with its own cameras. This makes installation a bit more difficult but should be more precise and reduce the overall weight of the head unit.
For those that wear glasses, the company has improved the design to better allow for glasses, and they also make it easy to replace the foam surrounding the headset.
One thing that was really not known yet was what kind of control mechanism Oculus was going to employ. In the demos I did at CES, there was no interaction, and you were basically a bystander. Oculus announced today that every Rift will be shipping with an Xbox One wireless controller and the just announced wireless adapter for Windows. This is a mutually beneficial agreement to say the least, with Microsoft getting in on the VR action and Oculus getting access to a mature controller design. Oculus even stated that the controller is going to be the best way to play a lot of VR games. However they also announced their own controller for a new genre of VR games to give an even more immersive experience.
Oculus Touch is the name of the new controller system that Oculus has come up with. Each controller has a traditional analog thumbstick, two buttons, an analog trigger, and a “hand trigger” input mechanism. The two controllers are mirror images of each other, with one for each hand. They are wireless, and they use the same IR LED tracking system so that they can be tracked in space. The controllers will also offer haptic feedback so that they can be used to simulate real-world touch experiences. They also detect some finger poses (not full finger tracking) in order to perform whatever task is assigned to that pose. These should be pretty cool and I can’t wait to try them out.

Hardware is certainly part of the story, but software is going to be possibly an even bigger part. The Rift needs to launch with quality games, and it looks like Oculus has some developers on board with EVE: Valkyrie, Chronos, and Edge of Nowhere being some of the featured games.
They also showed off their 2D homescreen, which they are projecting into the 3D Rift world. There will be easy access to social networks and of course multiplayer gaming in virtual reality.
In addition to the Xbox controller, Oculus has also worked with Microsoft to enable the upcoming Xbox Game Streaming into the Rift, so that you can be fully immersed. This will not magically make Xbox games 3D VR worlds, but instead will project the Xbox game into a big 2D screen inside the Rift and block out all distractions.
I’ve been a bit of a VR skeptic, but my time with the Rift was pretty cool. I can see a lot of applications for this outside of gaming, but of course gaming is going to be a big part of VR and Oculus looks to be lining up a pretty nice looking launch. A big part is going to be quality titles for the Rift and Oculus is working hard on that aspect. The hardware is now pretty polished.
Source: Oculus
Nantero Exits Stealth: Using Carbon Nanotubes for Non-Volatile Memory with DRAM Performance & Unlimited Endurance
The race for next generation non-volatile memory technology is already on at full throttle. We covered Crossbar’s ReRAM announcement last year and last week a very exciting company with a different non-volatile technology exited stealth mode and shed light on its technology and commercialization plans. The company is called Nantero and it’s been developing its NRAM technology for well over a decade now.

Before we talk about the technology itself, let’s briefly discuss the company and its key people, as Nantero is probably an unfamiliar name to many (it was for me, at least). The company was founded by Greg Schmergel, Dr. Tom Rueckes and Dr. Brent M. Segal in 2001. Mr. Schmergel and Dr. Rueckes are both still with the company and serve as CEO and CTO respectively, but Dr. Segal left the company in 2008 as part of the acquisition of Nantero's Government Business Unit by Lockheed Martin. Mr. Schmergel is a renowned serial entrepreneur who founded ExpertCentral, which was later acquired by About.com, where he served as a Senior Vice President before co-founding Nantero. While Mr. Schmergel brings valuable business expertise to the company, the technology comes from Dr. Tom Rueckes, a Harvard Ph.D. in chemistry and the inventor of the NRAM technology.
The Board of Directors includes several semiconductor industry veterans. Mr. Lai was one of the leading developers of NAND technology at Intel and also led Intel’s Phase Change Memory (PCM) team. Dr. Makimoto is a former Chief Technologist of Sony and Hitachi, and Mr. Scalise is the former President of the Semiconductor Industry Association (SIA) who also served briefly as an Executive Vice President at Apple in the late 90s. Mr. Raam may also be a familiar name to some since he is the former CEO of SandForce (the SSD controller company), which is now owned by Seagate.
The Technology
It goes without saying that Nantero is packed with semiconductor experience and know-how, but its technology is no less interesting. NRAM is made out of carbon nanotubes, one of the strongest materials known, with far better thermal and electrical conductivity than any other known material. The way NRAM works is in fact relatively simple. Essentially there are two nanotubes, which have low resistance when in physical contact and high resistance when separated. The amount of resistance then determines whether the cell is considered to be programmed as ‘0’ or ‘1’. The program operation (or “SET” as Nantero calls it) works by applying a voltage to one of the nanotubes, which then attracts the other nanotube and creates a bond. The SET operation is very fast and takes only picoseconds, which is on par with or better than DRAM latency. The bond is kept intact by van der Waals interactions and is practically immortal, with data retention of over ten years even at 300°C. In an erase operation (or RESET as Nantero calls it) the voltage is simply applied in the other direction, which “heats up” (given the scale, it’s more like vibration) the nanotube contacts and causes them to separate. Given that carbon nanotubes are among the strongest materials in the world, the write/erase endurance is practically infinite: an independent university study has shown Nantero’s NRAM technology to endure over 10^11 P/E cycles (for reference, 10^11 is 100 billion).
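The SET/RESET behavior described above can be caricatured as a simple two-state cell. This is purely an illustrative model of the resistance readout, not Nantero's actual implementation; the class and method names are invented for the sketch:

```python
# Toy model of an NRAM cell: two nanotubes are either in physical contact
# (low resistance, reads as '1') or separated (high resistance, '0').
class NRAMCell:
    def __init__(self):
        self.in_contact = False  # starts in the erased (RESET) state

    def set(self):
        """SET: applied voltage pulls the tubes together;
        van der Waals forces then hold the bond with no power."""
        self.in_contact = True

    def reset(self):
        """RESET: a reverse-polarity pulse vibrates the junction apart."""
        self.in_contact = False

    def read(self):
        """Readout senses resistance: low -> 1, high -> 0."""
        return 1 if self.in_contact else 0

cell = NRAMCell()
cell.set()
print(cell.read())   # 1, and it stays 1 with no refresh (non-volatile)
cell.reset()
print(cell.read())   # 0
```

The key contrast with DRAM is that nothing in the model decays: once `in_contact` is set, no refresh cycle is needed to keep the bit.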
The other great news is that carbon nanotubes are extremely small. One nanotube can have a diameter of only 2nm and the pitch between the two nanotubes in off-state can be an even tinier 1nm, so the technology has potential to scale below 5nm. NRAM can also scale vertically, or go 3D, and since the cell structure and manufacturing process are both quite simple, 3D stacking should, in theory, be much easier compared to what 3D NAND is today with no need for high aspect ratio etching as an example.
The Manufacturing Process
The process of making an NRAM wafer starts by taking a normal CMOS wafer with the usual cell select and array line circuitry, which is then spin-coated with carbon nanotubes. Carbon nanotubes are grown from iron, which would normally contaminate a clean room, so Nantero had to develop a patented process that creates ‘pure’ carbon nanotubes with fewer than one out of a billion particles being foreign (the standard for the highest quality clean rooms). Nantero has worked hard over the past two years to bring the cost of carbon nanotubes down, and currently the company says that the nanotubes have a negligible impact on chip cost, meaning making NRAM isn't inherently more expensive than any other semiconductor.
Top-down SEM of NRAM
With the nanotubes on the wafer, the top electrode is deposited on top of the nanotubes, followed by the photoresist, which is then patterned using a single mask. Finally the wafer is etched to cut the nanotubes into smaller pieces (i.e. more memory cells), and that’s it in a nutshell. Obviously there are other general semiconductor processing steps involved, but those are the same for all memory technologies, so the fundamental process of manufacturing NRAM isn’t that complex. All that is needed is a normal CMOS fab because the NRAM process requires no special or additional tools.
Fortunately, NRAM isn’t just a technology that exists on paper. Nantero’s NRAM process has already been installed in seven production CMOS fabs ranging from 20nm to 250nm, and limited production has been taking place for several years now, although only in small, few-megabit capacities. As a matter of fact, Nantero completed a successful space test with NASA on Space Shuttle Atlantis back in 2009, where NRAM operated without any shielding throughout the trip without any errors despite the intense radiation, because as I mentioned earlier, the nanotube bonds are practically unbreakable and are not affected by heat, magnetism, radiation and the like.
Nantero’s Business Plan: Bringing NRAM to Everyone
Because Nantero is an IP licensing company, it relies solely on its partners for production. It's a logical strategy because a decent-sized fab requires an investment on the order of billions of dollars, and in the end the company would have to compete against Intel, Samsung and the rest of the semiconductor giants. Actual end products will be sold under the manufacturer's brand (e.g. Intel), so you won't see any Nantero-branded products on the market.
Nantero isn't disclosing any of its partners at this point as most of them are still developing products that have the potential for higher volume production. While Nantero has its own chip team that is developing high capacity (several gigabit) dies, every partner is also doing its own work to implement NRAM at a larger scale, which makes sense given that the big semiconductor companies have far more resources and are familiar with high capacity memory devices.
Aside from semiconductor companies, Nantero has also partnered with several more consumer-facing companies to develop concepts and products around NRAM technology. Since NRAM provides the same level of performance as DRAM but is non-volatile, NRAM could open the doors for products that aren't achievable (at least properly) with today's NAND and DRAM technology. As examples Nantero mentions 3D smartphones and commercial 3D printers (although to be frank both already exist to some extent), but practically anything that's handicapped by IO performance and volatility can be fixed with NRAM in the future.
Since it will take several years before NRAM is even close to modern NAND capacities, Nantero has a three-step strategy for bringing NRAM to the market. In the first step Nantero is simply offering a class of memory (both standalone and embedded) that has DRAM's performance characteristics and NAND's non-volatility. Technically that means NRAM is competing against current MRAM and ReRAM products for a specialized niche market that really needs high performance and non-volatility. The consumer market is obviously not one of those, and even for the enterprise NRAM is likely too small in capacity and too expensive, but industrial and especially space/military applications should benefit from NRAM despite the high initial cost.
The next step is to grow NRAM to gigabit-class capacities and offer a non-volatile alternative to DRAM. Going to gigabit-class certainly opens the doors for NRAM as a mainstream memory because it could be used for a variety of caching applications that benefit from non-volatility (SSDs with their DRAM caches for NAND mapping table are a prime example). Tape out of first gigabit NRAM wafers is still about 18 months away, so I would expect to see something shipping perhaps in late 2017 or 2018.
The final step, of course, is a terabit-class die to replace NAND (FYI, Samsung is projecting 1Tbit NAND die in 2017). Achieving that requires work on both lithography scaling and 3D integration technologies because such a high capacity die is only economical with either multiple layers or advanced lithography, or both.
NRAM also has the potential to operate in MLC mode for further density improvements, but for now Nantero is focusing on scaling NRAM down and adding layers through 3D to increase density. Once the work on those two is done and has been implemented to a production fab, Nantero will start commercializing NRAM MLC technology, but that is likely at least several years away.
Final Words
The announcement is intriguing to say the least. From a technology standpoint NRAM sounds very exciting because it effectively brings us non-volatile DRAM performance, and better yet the cell design is scalable, whereas DRAM has major struggles going below 20nm. I like the fact that Nantero has decided to go with an IP licensing model because it means that NRAM is a technology available to everyone. The reason why DRAM and NAND are where they are today is that multiple companies produce them, resulting in competition with billions of R&D dollars.
I wonder if any of the big semiconductor companies have partnered with Nantero yet. Most of them have been tight-lipped about their post-NAND plans, but maybe Nantero's announcement will sooner rather than later force the companies to talk about their strategies. Obviously a lot depends on how far 3D NAND can efficiently scale, but from what I have heard the transition to next-generation memory technologies should begin around 2020. The future of memory isn't here yet, but it's certainly getting closer and it will be interesting to see which technology ends up taking the crown.
Plextor M7e PCIe SSD to Ship in Q3, M7V TLC SSD in 2016 & New Software Features
Plextor first showed off the M7e at CES earlier this year, and at Computex we got an update on the release schedule. Plextor is now aiming for a Q3 release, meaning that we will likely hear the final release details at Flash Memory Summit in August. The specifications have not really changed: the M7e still utilizes the same Marvell PCIe 2.0 x4 AHCI controller with performance rated at up to 1.4GB/s read and 1GB/s write, as well as up to 125K random read and 140K random write IOPS. The M7e will be available in both M.2 and PCIe card form factors with capacities ranging from 256GB to 1TB, so the M7e may very well be the first M.2 2280 drive to break the 1TB barrier.
Regarding the TLC drive M6V (or M7V as Plextor now calls it), Plextor is taking its time to fine-tune the firmware to squeeze every megabyte of performance out of the drive and, more importantly, ensure high reliability and endurance. Plextor told me that its firmware can boost the endurance to 2,000 P/E cycles with 15nm TLC, so if the claim holds true then I'm fine with Plextor taking a little longer and pushing the release to 2016.
On the software side, Plextor actually had three new items to show. The first one is an updated PlexTurbo, which now carries version number 3 and increases the maximum cache size to 16GB. The cache size is also now user-adjustable and multiple disks are supported, so one can decide which Plextor SSD to speed up with PlexTurbo.
The first all-new addition to Plextor's software suite is PlexVault, which creates a hidden partition for storing sensitive data. The partition is completely hidden and isn't even visible in Disk Management, so other users won't even know that such a hidden partition exists. Accessing the partition works through a hot key, although a password can also be set to protect the hidden partition from accidental access. I'm not sure how useful the feature really is, but I guess it creates another layer of security for NSFW (not safe for the wife) content for those who may need it.
The final piece of new software is PlexCompressor, which is an automated compression utility. If a file is not accessed for 30 days, PlexCompressor will automatically compress it to increase free space. The file is then uncompressed when accessed, which obviously takes back a bit of the free space since the file will now be stored in uncompressed form for another 30 days. The compression is transparent to the user and is done fully in software (i.e. by the CPU), so it's not SandForce-like hardware compression. There is no impact on SSD performance, although as compression consumes some CPU cycles there may be an impact on CPU-heavy workloads and especially battery life. Out of the three pieces of software Plextor has, I think PlexCompressor is the most potent because it results in concrete extra free space for the end user, and with SSD prices still being relatively high (compared to HDDs) it makes sense to get the most out of the storage one has.
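The idle-file compression idea is simple enough to sketch. The following is a hypothetical re-implementation for illustration only, not Plextor's code; it gzips anything untouched for 30 days, roughly mirroring the behavior described above (minus the transparent decompress-on-access, which requires filesystem-level hooks):

```python
# Sketch of a PlexCompressor-style idle-file compressor (illustrative).
import gzip
import os
import shutil
import time

IDLE_SECONDS = 30 * 24 * 3600  # 30 days without access

def compress_idle_files(root, now=None):
    """Gzip every file under root whose last access is older than 30 days,
    then remove the original so only the compressed copy remains."""
    now = time.time() if now is None else now
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if name.endswith(".gz"):
                continue  # already compressed; skip
            if now - os.path.getatime(path) < IDLE_SECONDS:
                continue  # accessed recently; leave it alone
            with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            os.remove(path)
```

A real implementation would hook the filesystem so reads transparently decompress; this sketch only covers the background compression pass.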
Synology Launches RC18015xs+ / RXD1215sas High-Availability Cluster Solution
Synology is no stranger to high-availability (HA) systems. Synology High Availability is touted as one of the features that differentiate Synology's NAS units from other vendors' for small business and enterprise usage. Put simply, Synology HA allows two NAS units (of the same model) to be connected to each other directly through their LAN ports, while also being connected to the main network through their other LAN ports. One of the NAS units is designated as the active unit, while the other passively tracks updates made to that unit. In case of any failure in the active unit, the other one can seamlessly take over without any downtime.
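The active/passive failover described above boils down to heartbeat monitoring over the direct link. The following is a toy sketch of that logic for illustration; the class, timeout value, and role names are invented and have nothing to do with Synology's actual implementation:

```python
# Toy active/passive failover: the passive node watches heartbeats from
# the active unit over the direct LAN link and promotes itself when the
# peer goes silent. Hypothetical logic, not Synology HA internals.
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before assuming peer failure

class PassiveNode:
    def __init__(self):
        self.role = "passive"
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        """Called whenever the active unit pings over the direct link."""
        self.last_heartbeat = time.monotonic()

    def check_failover(self, now=None):
        """Promote to active if the peer has been quiet too long."""
        now = time.monotonic() if now is None else now
        if self.role == "passive" and now - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.role = "active"  # take over shared services/IP
        return self.role

node = PassiveNode()
node.on_heartbeat()
print(node.check_failover())                              # peer alive: passive
print(node.check_failover(now=time.monotonic() + 10))     # peer silent: active
```

The hard part in a real system is everything this sketch omits: replicating writes to the passive node so its data is current, and avoiding split-brain when the direct link itself fails rather than the peer.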
Synology is now extending this concept to a high-availability cluster. The products being introduced today are the RackStation RC18015xs+ compute node and the 12-bay RXD1215sas expansion unit.
Unlike Synology's traditional RackStation products, the compute node doesn't come with storage bays. It is just a 1U server sporting a Xeon E3-1230 v2 (4C / 8T Ivy Bridge running at 3.3 GHz) CPU. The specifications of the RC18015xs+ are provided below.
The PCIe 3.0 x8 slot allows for installation of 10 GbE adapters, if required. The compute node is priced at $4000. The expansion unit comes with the following specifications, and it is priced at $3500.
In order to set up a high-availability cluster, two compute nodes and at least one expansion unit is needed (as shown in the diagram on top). The operation of the cluster and high-availability features are similar to Synology HA. Performance numbers are of the order of 2,300 MBps and 330K IOPS using dual 10G adapters. All DSM (v5.2) features such as SSD caching and virtualization certifications are available. High-availability is also ensured with redundancy of hardware components (PSUs / SAS connectors / fans etc.).
The other important aspect of today's announcement is the usage of btrfs for the file system. As of now, the only COTS NAS units with btrfs support in this market segment have been those from Netgear and Thecus. So, it is heartening to see Synology also adopting it. btrfs brings along many advantages, including snapshots with minimal overhead and protection against bit-rot. The unfortunate aspect is that it is currently only available in this high-availability cluster solution. We hope it becomes an option for other NAS models soon.
Coming to the pricing aspect, we see that consumers need to buy two compute nodes and one expansion unit at the minimum, bringing the cost of a diskless configuration to $11,500. This is pretty steep, considering that Quanta's cluster-in-a-box solutions (with similar computing performance) can be had along with Windows Server licenses for around half the price. That said, Synology's products have always carried a premium (deservedly so, for the ease of setup and maintenance), so the pricing strategy here is not a surprise.
Phanteks Computex 2015 Booth Tour
Phanteks had its new Enthoo EVOLV series cases in both mini-ITX and full ATX form factors on display in its suite at Computex. The mini-ITX version is made out of steel and available in two color schemes (white-black & red-black). There's a single 200mm fan installed in the front with room for two 120/140mm fans at the top and one at the back. It can take a 330mm GPU and a 200mm CPU cooler, so you can build a fairly powerful system. One of the more special aspects of Phanteks' cases is the PSU cover, which essentially hides the PSU cables to create the clean look that many desire.
The ATX version is made fully out of 3mm thick aluminum (aside from the side window). For some reason the design and overall build remind me of the original Mac Pro, which isn't a bad thing at all.
One of the unique aspects of the case is its fully modular hard drive bays. I have to say I really like the concept, because many ATX cases easily have five or more fixed bays, while in reality most users probably won't use more than one or two. Phanteks includes three with the case, but the user can obviously buy extras if needed.
There are actually two hard drive bays and SSD brackets behind the main chamber, so in most cases the user won't even need the modular HDD bays and can thus maximize airflow by not having anything between the fans and motherboard.
Phanteks also had a prototype of a dual-system case that can take a full ATX motherboard and a mini-ITX one. The interesting part is that Phanteks is working on a power splitter, so the two systems could be powered by a single PSU to save on space and cost. As you can see, the concept isn't really final yet, because Phanteks needs to do some custom cabling in order to be able to close the case; right now the cables stick out too much. It's a niche product for sure, but the idea of running two full systems inside a decent-sized case is definitely alluring. See the gallery for more shots of the prototype and other cases Phanteks had to show!
Read More ...
Lian Li Computex 2015 Booth Tour
Lian Li had close to a dozen new or prototype-level cases on display at Computex. I've added most in the gallery at the end of this post, but I'll go through a few of the highlights here as well.
The first one is the PC-V33A, which is a box-like case in which the motherboard is mounted horizontally. The top cover is made out of a single piece of aluminum, but it opens up for easy installation.
The case above is more of a conceptual prototype where Lian Li is playing around with a taller case design. Instead of having hard drive bays next to the motherboard, there's room for four hard drives in the top chamber, which allows for better airflow in the main chamber.
One of the more down-to-earth designs is the PC-K621, which is also Lian Li's first non-aluminum case. Traditionally Lian Li has kept the Lancool brand for value cases, but it seems that the company is now trying to consolidate everything under a single brand. The PC-K621 is made out of steel and plastic, yet it feels very sturdy, and while the front panel is plastic, it has a metal-like look to it. Pricing will be about $70, so while it's not exactly a value case, it's still considerably cheaper than the rest of Lian Li's lineup.
One minor change Lian Li has made to its cases is switching the power button material from plastic to aluminum. The company received many complaints about the power button not having the same feel as the rest of the case, so, like any respectable company, Lian Li listened to its customers and made the change.
And obviously no Lian Li booth tour is complete without the computer desk case. Lian Li has modified the design a bit so that one can now easily sit with legs under the table, which was one of the issues the earlier cases had (note: that's Kip Hartwell, Lian Li's marketing rep, in the photo, not me). The desk is still expensive, though, and Lian Li doesn't really have any plans to make a value model, but it's a relatively small niche anyway.
Check out the gallery above if you're interested in seeing what else Lian Li had to offer!
Read More ...
Here's the Nokia Moonraker Smartwatch Which Got Killed in Favor of Microsoft Band
The smartwatch project was scrapped after the acquisition
Read More ...
Sources: Hack on Fed. Database Lost 4.1M Social Security Numbers, Personal Info
Some seek to justify more Orwellian spying, ignoring that mass domestic spying initiatives utterly failed to stop prior attacks
Read More ...
Windows 10 Will Arrive in August, w/ New Devices in Sept.-Oct.
Leading the charge will be the Lumia 940 XL, according to numerous leaks
Read More ...
BlackBerry's Redesigned Passport Leaks -- With the Exact Same Hardware Inside
Leak reveals a perplexing expenditure of effort on Canadian phonemaker's part
Read More ...
Editorial: A Note to Apple on Watch Complications -- "You Keep Using That Word..."
Complications are about precision, craftsmanship, and complexity -- the antithesis of how Apple is redefining the term in Watch OS
Read More ...
In-Depth: A (Semi-Complete) Guide to What's New in iOS 9
iOS 9 brings true multi-tasking, new "snap" style controls, new APIs, a smarter Siri, mass transit mapping, a unified "Wallet", and more
Read More ...
Facebook Begins Mass Rollout of Free Bluetooth Business "Beacons"
Project is arguably Facebook's biggest dedicated hardware effort to date, aims to realize goal of massive localized data mining
Read More ...
Samsung Quietly Rolls Out Slick Galaxy S6 Active, AT&T Gets Exclusivity in U.S.
Model shares much with the baseline release, but loses the fingerprint scanner in favor of hardware buttons
Read More ...
Hot Air? President Obama, G7 Pledge to Eliminate Most Fossil Fuel Use by 2100
U.S. is unlikely to reach that commitment given that its consumption of fossil fuels has increased by nearly a fifth in the last half decade
Read More ...
Available Tags: EVGA, NVIDIA, SSD, SATA, Nokia, Microsoft, Hack, Security, Windows, Hardware, Apple, iOS, Facebook, Samsung, Galaxy