Thursday, March 26, 2009

Forex Strategy Outlook: US Dollar Volatility Complicates Trading Outlook

Extreme US Dollar volatility leaves our forex trading signals at somewhat of a disadvantage, as sideways price action gives little indication of short-term trends. The Euro/US Dollar has seen some of the most extreme intraday price changes in history, but the currency remains stuck in a fairly wide range. Absent a break in either direction, we see few clues on what to expect next. Our Sentiment indicators signal that traders remain largely neutral on the US dollar, and ideally we would wait for a clearer trading bias before committing to trend-based currency trades.

Forex Trading Automated Systems Outlook
DailyFX+ System Trading Signals – Momentum2, Breakout2, and Range1 trading strategies remain our top performers over the past 60 days of trade. We are nonetheless mindful that Momentum2 and Breakout2 trades may underperform through the near term on extremely choppy price action. Just recently Momentum2 and Breakout2 went short the EUR/USD on the heels of previous declines, but the almost-unbelievable intraday spike higher stopped out both systems at a sizeable loss. All the same, Momentum2 and Breakout2 trades remain attractive from a risk-reward perspective.

It will nonetheless be important to monitor US Dollar pairs through the near term and manage our trading biases accordingly. For the moment, we favor Momentum1 and Momentum2 trading signals. Yet this could easily change if we see signs of rangebound markets, and we will update our Forex Trading Strategy Outlook accordingly.

Definitions

Volatility Percentile – The higher the number, the more likely we are to see strong movements in price. This number tells us where current implied volatility levels stand in relation to the past 30 days of trading. We have found that implied volatilities tend to remain very high or very low for extended periods of time. As such, it is helpful to know where the current implied volatility level stands in relation to its medium-term range.

Trend – This indicator measures trend intensity by telling us where price stands in relation to its 30 trading-day range. A very low number tells us that price is currently at or near monthly lows, while a higher number tells us that we are near the highs. A value at or near 50 percent tells us that we are at the middle of the currency pair’s monthly range.

Range High – 90-day closing high.

Range Low – 90-day closing low.

Last – Current market price.

Strategy – Based on the above criteria, we assign the more likely profitable strategy for any given currency pair. A highly volatile currency pair (Volatility Percentile very high) suggests that we should look to use Breakout strategies. More moderate volatility levels and strong Trend values make Momentum trades more attractive, while the lowest Vol Percentile and Trend indicator figures make Range Trading the more attractive strategy.
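Taken together, the definitions above amount to a simple decision rule. The sketch below is a hypothetical Python illustration of that rule: the percentile and range calculations follow the definitions given here, but the cutoff values (75, 40, 25) are illustrative assumptions, not DailyFX's published parameters.

```python
def volatility_percentile(implied_vols):
    """Rank of the latest implied vol within the past 30 readings (0-100)."""
    window = implied_vols[-30:]
    current = window[-1]
    below = sum(1 for v in window if v <= current)
    return 100.0 * below / len(window)

def trend_indicator(closes):
    """Where the last price sits inside its 30-day range (0-100)."""
    window = closes[-30:]
    lo, hi = min(window), max(window)
    if hi == lo:
        return 50.0          # flat range: price is "mid-range" by convention
    return 100.0 * (window[-1] - lo) / (hi - lo)

def assign_strategy(vol_pct, trend):
    """Map the two indicators to a strategy bucket (illustrative cutoffs)."""
    distance_from_mid = abs(trend - 50)   # strong trend = far from mid-range
    if vol_pct >= 75:                     # very high vol -> Breakout
        return "Breakout"
    if vol_pct >= 40 and distance_from_mid >= 25:
        return "Momentum"                 # moderate vol + strong trend
    return "Range"                        # low vol, weak trend
```

Note how a very high volatility percentile maps to Breakout regardless of trend, mirroring the priority described above.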

The information contained herein is derived from sources we believe to be reliable, but which we have not independently verified. FOREX CAPITAL MARKETS, L.L.C.® assumes no responsibility for errors, inaccuracies or omissions in these materials, nor shall it be liable for damages arising out of any person’s reliance upon this information. FOREX CAPITAL MARKETS, L.L.C.® does not warrant the accuracy or completeness of the information, text, graphics, links or other items contained within these materials. FOREX CAPITAL MARKETS, L.L.C.® shall not be liable for any special, indirect, incidental, or consequential damages, including without limitation losses, lost revenues, or lost profits that may result from these materials. Opinions and estimates constitute our judgment and are subject to change without notice. Past performance is not indicative of future results.

India Inc may get 2-year relief over forex losses

The National Advisory Committee on Accounting Standards (Nacas), which is the final word on accounting policies followed by Indian industry, has favoured suspending for two years a key rule that requires firms to mark-to-market foreign exchange assets and liabilities, a decision which comes as a victory for corporate India as it sits down to draw up yearly financial results.

The demand to suspend this rule, known in accounting circles as AS-11, was made by the Confederation of Indian Industry (CII) on the grounds that it could severely distort the earnings of many companies. It was contended that this accounting standard, designed to address normal conditions, should be suspended for the time being, as present market conditions were not normal.

India Inc may post better results if Nacas’ recommendations are accepted, as it would spare several companies from taking a hit to reflect the 27% depreciation of the rupee against the dollar in the past one year. Higher profits would mean higher tax collections for the government.
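The mark-to-market mechanics behind that hit are simple arithmetic. As a hedged sketch with invented numbers (the loan size and rates below are hypothetical, not taken from any company's accounts): a firm with a $10 million liability booked when the rupee was near 40 per dollar, restated at a closing rate near 51 (roughly the 27% depreciation cited), must recognise the translation loss.

```python
def mtm_fx_loss(foreign_liability, rate_at_booking, rate_at_close):
    """Restatement loss (in local currency) from marking a foreign-currency
    liability to the closing exchange rate, as AS-11 would require."""
    return foreign_liability * (rate_at_close - rate_at_booking)

# Hypothetical example: $10M liability, rupee slides from 40/$ to 51/$.
loss = mtm_fx_loss(10_000_000, 40.0, 51.0)   # Rs 110 million hit to P&L
```

Suspending the rule spares companies from booking such losses in the year the rupee fell, which is why reported profits would look better.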

A similar debate is now raging in the US on whether the capital market regulator, the Securities and Exchange Commission, should suspend the mark-to-market accounting rule that has forced banks to report billions of dollars in asset writedowns. Nacas’ recommendations are usually accepted by the government. Nacas chairman YH Malegam declined to comment on whether the body, which was constituted by the ministry of corporate affairs, had asked for the suspension of AS-11 until April 2011.

The ministry of corporate affairs, which gives statutory force to Nacas’ suggestions through notifications, also declined to comment. Nacas consists of representatives from the ministry of corporate affairs, the Reserve Bank of India (RBI), the Comptroller and Auditor General of India (CAG) and various chambers of commerce.

The decision to hold off implementing AS-11, which would have forced companies to mandatorily account for their foreign exchange losses, was taken at a Nacas meeting held in Mumbai on Tuesday.

FOREX-Dollar off lows hit on Geithner remarks, yen slips

TOKYO, March 26 (Reuters) - The dollar rose against the yen on Thursday, rebounding from lows hit after U.S. Treasury Secretary Timothy Geithner said he was open to expanding the use of the International Monetary Fund's special drawing rights.

Investors initially interpreted Geithner's remarks on Wednesday as an endorsement of China's proposal this week to eventually replace the dollar as the world's reserve currency with the IMF's SDRs. [ID:nPEK184558]

Geithner's comments pushed the dollar lower against the euro and the yen on Wednesday, although it regained ground after he said the dollar would keep its status as the top reserve currency for a long time. [ID:nN25425979]

Geithner was probably commenting on China's call for expanding use of the IMF's SDRs rather than about the notion that the SDR may eventually replace the dollar as the world's reserve currency, said Masafumi Yamamoto, head of foreign exchange strategy Japan at Royal Bank of Scotland.

"The remarks probably were not made from the standpoint of foreign exchange policy," Yamamoto said, adding that the Treasury secretary probably did not mean to affect moves in the dollar with his comments.

"It is hard for the United States to seek a stronger dollar, but at the same time, if they call for a weaker dollar that could make it hard for them to finance their (current account) deficit, as investors may shy away from buying Treasuries," Yamamoto said, adding that a higher dollar would hurt the trade competitiveness of U.S. goods.

The dollar rose 0.3 percent from late U.S. trading on Wednesday to 97.80 yen. That was up from Wednesday's low of 96.90 yen and last week's one-month low of 93.55 yen hit on trading platform EBS.

The euro dipped 0.2 percent to $1.3552, having pulled back from Wednesday's high of $1.3653 and last week's 2-½ month high of $1.3739.

Against the yen, the euro was steady at 132.53 yen, having pulled back from a five-month high of 134.50 yen hit earlier this week.

Market players said the yen was pressured by selling by a Japanese brokerage. A trader for a European bank said the reasons behind the flows were unclear, although they might be related to overseas investment by Japanese investors.

With the end of Japan's fiscal year coming up next week, this is a time when special seasonal flows can appear, he added.

Gains in Tokyo shares came after U.S. stocks rose on Wednesday, as unexpectedly strong housing and durable goods data fuelled hopes the economy is finally on the mend.

The New Zealand dollar rose 0.8 percent to 55.66 yen and the Australian dollar was 0.4 percent higher at 68.38 yen. The Australian dollar hit a 4-½ month high of 69.60 yen earlier this week.

The New Zealand dollar gained some support after the country's fourth-quarter current account deficit was in line with market expectations, allaying fears of a large blowout of the deficit that could hurt New Zealand's currency and credit rating. [ID:nWEL487349]

Earlier, the Reserve Bank of New Zealand denied a market rumour that an emergency meeting was being held to discuss a spike in five-year yields, which have climbed around 80 basis points in the past two days. [ID:nWLF001287] (Editing by Kazunori Takada)

SEBI doubles forex futures limits

In a bid to encourage non-banking trading brokers and small traders, the market regulator Securities and Exchange Board of India (SEBI) on Tuesday doubled the position limits - the predetermined levels set by regulatory bodies for a specific contract or option - in exchange-traded currency derivatives.

According to the market watchdog, the gross outstanding limit for non-banking brokers has been raised to $50 million from the earlier $25 million, while for small traders the prescribed limit has been raised to $10 million from $5 million previously.

For banks, SEBI has made no changes and they can trade up to their earlier prescribed limit of $100 million. However, banks are not active in this sort of forex trading, as they have the more efficient option of accessing the over-the-counter (OTC) forex market.

This move by the regulator has come in the wake of market participants’ repeated pleas to enhance the existing position limits, as the small limits were inadequate to effectively hedge their foreign currency exposure risks.

Now that the exposure limits have been doubled, trading volumes are expected to increase further, which should also encourage broader market participation.

However, SEBI has also clarified that the position limits would be specific to an exchange and not to the exchange-traded currency derivatives market as a whole. Presently, the National Stock Exchange (NSE), Bombay Stock Exchange (BSE) and the Multi-Commodity Exchange (MCX) are the three entities that offer currency derivatives.

Forex: IT cos rejig hedging plan

As the fiscal fourth quarter comes to a close amid high currency volatility, Indian IT companies hope their hedging strategies will help them protect revenues and profitability. While some of them have maintained their strategies, others have adopted different routes to beat the fluctuations in the forex market.

Infosys, the country’s second biggest IT exporter, is playing it safe by taking a short term view on the dollar-rupee variation, while top exporter TCS and the third biggest Wipro have chosen to hedge their receivables for more than a year.

Infosys has decided to consider a horizon of not more than two quarters as the company expects the rupee would only depreciate over the short term. “Rising fiscal deficit, falling economic growth, widening trade deficit and political uncertainty in view of the elections are likely to keep the rupee weaker,” says Infosys CFO V Balakrishnan.

KN Dey, a director with foreign exchange brokerage Basix Forex, agrees. Although the rupee is likely to strengthen following the latest US bailout announcement, it might close at over 52 by March 2009 end, he said. “The rise in the rupee is more a temporary phenomenon, with traders moving their idle funds from currency markets to the local money markets (the short-term debt market) because of higher yields,” he added.

Infosys has hedged over $530 million in rolling contracts. The company deals only in range forward options. Under such an arrangement, the exposure of an exporter is limited to the range of the option. For instance, if the range is 1.3-1.6 for a dollar-euro contract, then the exporter will get the spot exchange rate as long as it falls in the range. If on the settlement day the spot rate falls below 1.3, the exporter gets the lower rate of 1.3, and if it moves above 1.6, he gets the higher rate of 1.6.
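The range forward payoff described above is just the spot rate clamped to the option's band. A minimal sketch, using the article's 1.3-1.6 dollar-euro example:

```python
def range_forward_settlement(spot, lower, upper):
    """Effective rate an exporter receives under a range forward:
    the spot rate, clamped to the [lower, upper] band."""
    return min(max(spot, lower), upper)

# The dollar-euro example from the text, with a 1.3-1.6 range:
range_forward_settlement(1.45, 1.3, 1.6)  # inside the band -> spot rate
range_forward_settlement(1.25, 1.3, 1.6)  # below the band  -> floor, 1.3
range_forward_settlement(1.70, 1.3, 1.6)  # above the band  -> cap, 1.6
```

The floor is the exporter's downside protection; the cap is the upside given away to pay for it.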

Wipro has continued to take longer-term bets in the forex market. “We normally cover 50-100% of net inflows (foreign currency revenue sans foreign currency expenses) for the next four quarters,” said Wipro corporate treasurer Rajendra Kumar Shreemal. Currently, Wipro has $1.8 billion in currency hedges spread over four years until FY 13.

Mid-tier IT company MindTree, which nearly wiped out its net profit in the December 2008 quarter on account of forex losses, has decided to focus only on plain forward and option contracts. Company CFO Rostow Ravanan says, “Basic hedging philosophy has not changed for us. But we now have only plain vanilla contracts and have hedged 50% of our receivables.” In the earlier quarters, the company had taken leveraged option contracts, which were riskier in nature.

WNS, the second-largest BPO firm in the country, has over the last two quarters extended its hedging horizon from 18 to 24 months due to higher forex fluctuations. “Every month, we reassess our position for the next 24 months. We have hedged over 90-95% of our future revenue,” says Alok Misra, CFO of WNS. He adds that the procedure helps smooth out the sharp volatility in the currency market. The company’s hedging position stands at just over $500 million, which completely covers FY10 revenue and partially covers FY11.

WORLD FOREX:Euro Recovers Tad Vs Yen,Dlr, But Upside Limited

The euro rose against the yen and dollar in Asia Wednesday as players bought back the common currency amid a continued bullish outlook for the European unit after it retreated from recent highs overnight.

Some players were also buying the euro, which tends to trade up on higher stocks, as Japan's benchmark Nikkei 225 Stock Average retraced earlier losses and briefly crossed into positive territory in the afternoon, dealers said.

At 0450 GMT, the euro stood at Y131.80 compared to Y131.60 late Tuesday in New York. Against the greenback, the common currency was at $1.3443.

"The key for the euro in the near-term is how stock prices will fare," said Osao Iizuka, head of foreign exchange trading at Sumitomo Trust & Banking.

If share prices picked up, this would lead to further gains for the euro, particularly against the yen, Iizuka said. "The euro has climbed about 10 yen over the past couple weeks," as investors' willingness to buy the higher yielding but riskier unit has rebounded with recovering share prices, he added.

But dealers said growing speculation that the European Central Bank may cut interest rates by as much as 50 basis points at its meeting next week would keep players from pushing it up much higher.

Against the dollar, the euro may hover around $1.3450, as "the ECB concerns limit its topside, but there are still no reasons to actively buy the dollar," said Sumitomo Trust & Banking's Iizuka.

For the rest of the week, the common currency may trade in a Y131-Y133 band against the yen, said Yuji Saito, vice president of foreign exchange at Societe Generale.

Meanwhile, the dollar was essentially unchanged against the yen, standing at Y97.80 at 0450 GMT compared with Y97.88 late Tuesday in New York.

While the greenback dipped slightly against the yen after Japanese trade data released in the morning showed Japan logged its first trade surplus in five months, the record drop in exports tempered any short-term players' enthusiasm for buying the yen, dealers said.

The yen could benefit from the trade data in the near term because they may assuage concerns that Japan could develop a chronic current account deficit, after logging its first such deficit in 13 years in January, said Masayuki Kichikawa, chief economist for Merrill Lynch. "Some people had been thinking that Japan might have a current account deficit this year, but the data should reduce those expectations," he said.

Fall in forex reserves mainly on revaluation

Nearly two-thirds of the $61-billion decline in India’s foreign exchange reserves was on account of valuation losses, indicating that the Reserve Bank of India (RBI) had now lowered its market intervention to check volatility of the rupee against overseas currencies.

According to market sources, the central bank has largely been absent from the foreign exchange market in recent weeks despite the rupee touching an all-time low of 52.18 against the US dollar on March 3. Foreign exchange dealers said that the RBI did not want to intervene heavily as it wanted to avoid sucking out rupee resources by selling dollars.

So far in 2009, the rupee has dropped by 3.46 per cent against the dollar as foreign institutional investors (FIIs) have continued to sell their investments in India, partly on account of the demands in their home markets.

According to the latest data released last Friday, the country’s foreign exchange reserves were estimated at $248.72 billion as on March 13, as against $309.71 billion at the end of March 2008. As on February 27, 2009, the foreign exchange reserves were estimated at $249.28 billion.

The bulk of the decrease in the reserves has been on account of lower foreign currency assets. At the end of February, foreign currency assets had dropped by $60.52 billion, which also indicates that the fall was largely on account of revaluation.

In recent weeks, the dollar had gained against most currencies globally, till the US Federal Reserve’s announcement to buy long-dated debt. On Thursday, following the proposed move, the US currency fell the most in 25 years.

Thursday, March 12, 2009

Intel Core 2 Duo Mobile Processor Review - T7600

A month ago we showed you exclusive testing results (here) of the new Intel Core 2 Duo T7400 “Merom” CPU. While those initial results showed good improvements in floating point operation, a quick battery life test (here) revealed that our testing platform was still not ready for prime time, as the T7400 should have delivered better battery performance by design.

The Core 2 Duo T7600 we are looking at today is a production sample (read: very likely the same quality and performance you will get at a retailer) clocked at 2.33GHz, slightly faster than the 2.16GHz T7400 sample we tested a month ago. And what a difference a month makes! Intel has made the improvements where it counts: lower power consumption, which translates into better battery life.

Intel has done it! While our early look at the T7400 showed rather poor battery performance, the T7600 we have tested today, which is heading into production, has made drastic improvements. We can now say without a doubt that Intel's latest mobile CPU has nailed the holy grail in mobile computing: it performs faster, consumes less power, and generates less heat. What else is there to say?

Now with power and heat issues sorted out, there's no reason why you shouldn't consider the Core 2 Duo in your next laptop. With price points as low as $209 for the T5500 up to $637 for the T7600, there’s a Core 2 Duo mobile CPU to suit all budgets and designs. It really looks like Intel has another hit CPU on its hands, and with all the design innovations from laptop vendors, it's hard not to be a little excited when looking forward. It's definitely a good time to be looking into a notebook computer, and Intel has given us many reasons to with its Core 2 Duo CPUs. So in the end, we're giving this CPU an Editor's Choice Award.

The latest Core 2 Duo mobile CPU is cooler, faster, and runs longer than the older Core Duo. Not only that, it has technology improvements under the hood, like a larger Level 2 cache (4MB) and 64-bit extensions to support 64-bit OSes such as the upcoming Windows Vista. If you've had reservations before about getting a laptop, the Core 2 Duo should have you convinced. Intel's track record in this arena is strong, and its latest CPU just solidifies its lead.

Managing Data Storage by Support Services

With exponential data growth, you need to find efficient, cost-effective ways to make the most of your critical storage infrastructure. And with the complexity of today's storage technology and the ever-mounting demands on your internal resources, it's tougher than ever to do it alone. With the storage industry at the cusp of strong growth as never before, the data storage environment poses multiple challenges that directly impact bottom-line business results.

More data is captured and stored by businesses now than ever before. A typical business today stores 10 times more data than in 2000, and a Gartner report estimates that storage requirements will have increased by a factor of 30 by 2012. The challenges are many: soaring storage costs; tighter regulations governing data retention, access and privacy; data center power, cooling and space limitations; scarce technical expertise; the constant threat of natural and manmade disasters; and the complexities of managing multivendor storage solutions. These can be addressed by end-to-end support services: multivendor hardware/software support, integrated remote support technologies, comprehensive installation and implementation support, and a full suite of flexible, scalable services to boost ongoing storage performance and availability.

Companies are moving from Direct Attached Storage (DAS) to networked storage with the adoption of Fibre Channel (FC) Storage Area Network (SAN) technologies. The major benefits associated with this move are higher availability, scalability, minimal interference with LAN traffic, increased management efficiency and utilization levels of about 90%, resulting in lower Total Cost of Ownership (TCO) and higher Return on Investment (ROI). As the economy has improved since the recession of 2000 and the demand for storing increasingly large volumes of data has grown, companies are beginning to spend on IT infrastructure and storage again.

Businesses therefore need to rethink the way they go about storing their data in order to enhance analytics, improve business processes and give themselves the best possible competitive advantage.

AMD Intros GPU

AMD has introduced the ATI Radeon E2400, a high-performance graphics processing unit designed to deliver the latest 2D, 3D and multimedia graphics performance. The new graphics technology is backed by planned five-year availability and long-term support, offering reliability for a variety of applications on operating systems featuring Microsoft DirectX 10 and OpenGL 2.0.

"With the input of major original equipment manufacturers and platform developers, we have designed the ATI Radeon E2400 from the start to deliver high graphics performance while meeting the unique requirements of the embedded market," said Richard Jaenicke, director of embedded graphics for AMD.

Built on 65nm process technology, the ATI Radeon E2400 includes AMD's Unified Shader Architecture with support for Microsoft DirectX 10, allowing customers to develop advanced content for many applications. The device package incorporates 128MB of on-chip GDDR3 memory for graphics-intensive applications, eliminating the space, effort, and cost of external memory designs. For designs that require a low-profile solution in space-constrained environments, AMD offers the ATI Radeon E2400 MXM-II module based on the open standard MXM-II specifications.

The ATI Radeon E2400 is scheduled to ship this month in production quantities. AMD will showcase the product both at Embedded World 2008 (February 26-28, 2008) in Nuremberg, Germany, and at the Embedded Systems Conference Silicon Valley (April 14-18, 2008) in San Jose, California.

Intel Core 2 Duo E6750 Preview

The entire Conroe line-up is built on a 65nm process, with the mainstream products offering 4MB of L2 cache. An improvement over the previous Pentium 4/Pentium D line-up was better power efficiency, resulting in a lower TDP and better overall temperatures. This is appreciated, as two cores under the same IHS can potentially create an unwanted room heater.
All but the lowest end Core 2 Duos take advantage of a 1066FSB. This is where this refreshed line-up comes into play, as it ushers in 1333FSB computing. This noticeable speed bump is all done while retaining the same TDP.
All Conroe 1333FSB processors are identified by a 50 at the end of the product name, hence E6750, which effectively takes over the spot of the E6700. Nothing has changed except for the FSB and resulting speeds, apart from the multiplier ratio of course, which had to be altered in order to complement the upgraded frequency.
One thing that should be cleared up is that most overclocking enthusiasts have already reached the speeds we are seeing today, and many have exceeded them. In fact, there is nothing stopping anyone from popping in an E6600 and overclocking using a 333MHz FSB and an 8x multiplier. That would effectively give you the exact same speed as the E6750 we are taking a look at today.
You might be wondering where the benefit is in this official speed bump. Primarily, it will benefit non-overclockers most. Comparing equal processor speeds at 1066FSB and 1333FSB, the added FSB frequency should make a much more noticeable performance difference than the CPU frequency boost itself.
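The overclocking equivalence mentioned above is simple arithmetic: the effective core clock is the FSB base frequency times the multiplier (the "1333FSB" marketing figure is quad-pumped, so its base clock is 333MHz). A quick sketch:

```python
def core_clock_mhz(fsb_base_mhz, multiplier):
    """Effective core clock = FSB base frequency x CPU multiplier."""
    return fsb_base_mhz * multiplier

# An E6600 dropped to an 8x multiplier on a 333MHz base clock matches
# the E6750's stock speed:
core_clock_mhz(333, 8)   # 2664 MHz, i.e. ~2.66GHz
```

The same relation explains why the official bump keeps the TDP unchanged: the final clock is the same, only the FSB/multiplier split differs.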

Intel Core 2 Extreme QX6700 (Kentsfield Quad Core)

Core 2 Duo has been one of the most important launches for Intel in quite some time, really taking back the desktop market by storm. Yet even when I was in Germany at a pre-launch briefing of Conroe/Core 2 Duo, Intel suggested that quad core wasn't far off either. In fact, the computer being used for the PowerPoint presentation was Kentsfield, Intel's code name for its quad-core processor. Not a particularly good use of resources, but an excellent demonstration of the state of play.

November has come around and, true to Intel's word, quad-core is here. It seems like only yesterday we were marvelling at the first dual-core solutions, so to have a “quad-core” processor in front of me seems almost surreal. However, in actuality, this isn't as much of a technological feat as you might think. Intel has basically taken two Core 2 Duo dies and put them into one package. I think Intel realises that this is cheating a little, and that's why the product name is Core 2 Extreme QX6700, which, apart from the subtle “Q”, doesn't mention quad anywhere in the name. This is an Extreme Edition processor, so it is naturally expensive, initially priced at $999. This isn't far off the current price of a Core 2 Extreme X6800 (£643), so in comparison it's pretty good value.

Technically speaking, the fact that the cores are in the same package is irrelevant. In order for data to be communicated between the two dies, it needs to go through the North Bridge via the Front Side Bus. Essentially, this means the performance will be identical to having two separate processors in two separate sockets.

Intel's approach does have its benefits, though. For one, by having all four cores in the same package, there is only one heatsink, and any board that currently supports Core 2 Duo will support Kentsfield as well. That said, we had to update the BIOS on our Gigabyte 965P motherboard in order to get it to boot. It also makes designing a decent motherboard a lot easier and means we can expect to see quad-core hitting the MicroATX platform.

Intel Core 2 Duo ‘Merom’ Notebooks

Every few months computer technology moves forward. Usually it’s only a small jump, such as the latest iteration of a graphics architecture, but sometimes it’s a significant one, such as the recent introduction of Intel’s Core 2 Duo desktop processor, known internally by Intel as Conroe.

Conroe’s arrival was very important, as it represented the first time that Intel had brought the fruits of its new ‘Performance per Watt’ architecture direction to the desktop. Intel has been moving in this direction for some time, ever since it realised that even as its ‘NetBurst’ Pentium 4 architecture was running out of steam, its Pentium M ‘Banias’ mobile chip was going great guns.

As such, it turned to the Banias design team, based in Haifa, Israel, to create an architecture that was efficient and able to scale, qualities that the Pentium 4 did not possess. Last year, I was lucky enough to be taken on a press tour of Intel in Israel and met some of the team responsible for Banias, Dothan and Yonah. It was clear then that all of these were leading up to the processor released today, known then only as Merom. Though it was the last to appear on the market, Merom is actually the processor on which its desktop and workstation counterparts, Conroe (Core 2 Duo) and Woodcrest (Xeon), are based.

This design architecture, which Spode talked about here, is known as the Core architecture. Rather confusingly though, Core Duo, which is Yonah, is not actually Core architecture – it was essentially a dual-core version of the Pentium M. Core architecture, with its various improvements and enhancements, actually begins with the Core 2 Duo, which in Conroe guise has already appeared on the desktop.

The reason for this is that Intel’s previous mobile chip, Yonah (Core Duo), was so good that Intel didn’t need to rush Merom to market. However, Intel definitely needed to bring Conroe to market, as it had for a long time been lagging behind AMD.

So how does the mobile version of Core 2 Duo (Merom) actually differ from the desktop version (Conroe)? Actually, the differences are relatively minor – though as it’s essentially the same chip, that’s not really surprising. This means that it sports all the excellent features that made Conroe so powerful. This includes Wide Dynamic Execution, consisting of an increase in pipelines from three to four, and the use of the Macro-Fusion technique that combines common pairs of instructions into a single instruction. Perhaps most crucially, Merom employs all of the power management saving tricks that the Core architecture is designed for, such as putting many parts of the CPU to sleep when they’re not required. This enables it to have a lower Thermal Design Power (TDP) figure of 34W, compared to 65W for Conroe, which is the essential figure for a mobile CPU. The other difference is that Merom runs at a lower Front Side Bus of 667MHz (versus 1,066MHz).

Intel Core 2 Extreme QX6700 - Quad-Core Power for Desktops

Intel's Core 2 Duo processor family bearing the new Core microarchitecture broke new ground when it was launched a scant four months ago, catapulting Intel back into the driver's seat of the microprocessor industry - a 'show hand' to which arch-rival AMD has yet to deliver a response to date. Despite the rave journalistic buzz, however, the Core 2 Duo is still a dual-core processor, and dual-core processors themselves aren't anything new (Intel's Pentium D and AMD's Athlon 64 X2 have been around since early 2005), not to mention that three and a half months is hardly enough time for the Core 2 Duo to really penetrate the retail channels.
The news most anticipated within tech circles, however, has been talk of Intel's upcoming quad-core part, codenamed Kentsfield. During the recent IDF Fall 2006, Intel confirmed the launch and we were even given the opportunity for a hands-on performance preview, which you can check out here. Today, Kentsfield becomes official. Quad-core processing has indeed arrived in the consumer space as Intel extends its leadership position even further.
The official name of the Kentsfield series will be Core 2 Quad in the mainstream segment and Core 2 Extreme in the enthusiast segment. The first Kentsfield processor available at launch will be the top-end 2.66GHz Core 2 Extreme QX6700, priced at US$999, the same as the 2.93GHz Core 2 Extreme X6800 at its launch. The QX6700 will be followed by the mainstream 2.4GHz Core 2 Quad Q6600, tentatively set to be released in the first quarter of 2007 and rumored to be priced around US$851. Whether the corresponding Core 2 Duo processors will receive price cuts remains to be seen, as nothing has been announced yet.
This naming convention is based on the fact that the Kentsfield processors are of the same generation as the dual-core Conroe and Allendale – hence 'Core 2' designates the processor series, with the 'Duo' or 'Quad' suffix designating the number of cores. What may initially be confusing, however, is that both Conroe and Kentsfield enthusiast parts will be named Core 2 Extreme. For these processors, the CPU model numbers give away their pedigree: those with a 'Q' prefix are quad-core models, e.g. the Core 2 Extreme QX6700.
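The decoding rule described above – a 'Q' prefix on the model number marks a quad-core part, otherwise the chip is dual-core – can be sketched as a small helper. This is a toy illustration of the article's naming convention, not an official Intel decoding scheme:

```python
def core_count_from_model(model: str) -> int:
    """Infer the core count of a Core 2 part from its model number,
    per the convention that quad-core models carry a 'Q' prefix
    (e.g. QX6700 is quad-core, X6800 is dual-core)."""
    return 4 if model.upper().startswith("Q") else 2

core_count_from_model("QX6700")  # quad-core Kentsfield
core_count_from_model("X6800")   # dual-core Conroe
core_count_from_model("Q6600")   # mainstream quad-core
```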

IBM Notebook: IBM thinking smart on stimulus money

As Washington policymakers brainstorm on how to invest billions of dollars to stimulate America's economy, IBM has launched a "Smart Planet" ad campaign.
The sleek productions feature Big Blue employees touting smart traffic, smart food, smart health care, smart energy, smart water management.
"Think smart," urges IBM, which last week also introduced a new consulting service to help governments.
Since Day One, IBM CEO Sam Palmisano has been urging the president to invest in "smart infrastructure" projects, from utility grids to food distribution systems. At the request of Obama's transition team, IBM produced a study showing those smart projects could put almost a million Americans to work.
That study was conducted by the Information Technology and Innovation Foundation, a self-described non-partisan think tank that has Christopher Caine, IBM vice president of governmental programs, serving on its board of directors.
The ITIF, which is funded by tech giants like IBM and the big phone and cable companies, lobbied Congress for tax credits on investments in broadband and other technologies, including smart-grid — brimming with Big Blue expertise.
The broadband tax breaks were cut from the stimulus bill, but the smart-grid incentives remained.
That means if IBM is awarded these projects, it will not only get stimulus money, but tax breaks on the stimulus money.
For Big Blue, that's smart thinking.
Christine Young covers IBM. She can be reached at 346-3140 or cyoung@th-record.com. IBM Notebook appears Mondays.

NetApp dumps Filerview for new model

NetApp's FilerView management product is going to be replaced by NetApp System Manager, a Windows application built as a Microsoft Management Console (MMC) 3.0 snap-in, and available in a few months.
The concept behind it is to provide management of one or many NetApp arrays through a simple and easy to use GUI. FilerView is HTTP-based and has been around in NetApp for a long time. According to NetApp company bloggers, customers deserve a more modern interface.
The new NSM product will be included with all FAS arrays, starting with FAS2000 and 3000s, and has a Windows Server 2008 look and feel. It supports "discovery, setup, FCP, iSCSI, CIFS, NFS, deduplication, provisioning, thin provisioning, snapshot and configuration management of multiple NetApp storage systems from a single pane of glass".
It's reckoned that a FAS array can be set up for the first time in around five minutes, with initial system configuration only needing two screen window pages. Subsequently FAS arrays can be set up in two minutes. It's said that there is auto-discovery for Active Directory, LDAP, DNS and DHCP. Add a host name to NSM and "you're up and running with all licensed protocols".
NSM is integrated with the Windows System Tray and will alert a sysadmin to monitored NetApp system problems.
NetApp best-practice settings for its arrays come with the product. Sysadmins will be able to use wizards to create LUNs, volumes and aggregates, with only three clicks needed for aggregate creation. We're told that "all disks in the aggregate are balanced across multiple back-end loops".
There is a specific wizard to support provisioning for VMware systems. When creating an NFS datastore in a VMware environment, "the process is similar to creating a volume, however, the wizard will also ask for the VMkernel IP address of the ESX host(s) so the NFS volume gets exported. The dedup option is also there."
IBM N-Series system users will get NSM too, and beta NSM builds will soon be available on the NetApp online Beta Community.

NetApp gives MetroCluster a good going-over

NetApp has updated its MetroCluster software to support storage array failover to remote site storage in VMware environments.
MetroCluster is software that, using replication, synchronously mirrors data writes from a primary NetApp FAS array to one in a remote site, across a campus or metro area, up to 100km away. It runs in the NetApp array controller and will failover user access to the remote site if a component in the array, below the controller, fails. The NetApp arrays in the primary and secondary sites form a high-availability cluster and are seen as a single resource by accessing servers.
NetApp says that, as a result of integration testing with VMware, the MetroCluster product now offers continuous data availability in VMware environments with failover accomplished in seconds. No update to NetApp's Data ONTAP operating system is involved.
It also facilitates transparent (to the users) software and hardware upgrades of a NetApp storage array with user data accesses re-directed to the remote site during the upgrade process and then returned to the primary array. Additionally the MetroCluster software has been integrated with NetApp's ASIS deduplication to reduce the amount of stored data.
The software works with VMware's ESX server and any storage array failover is transparent to ESX except in the situation where the array controller itself fails. In that case it cannot carry out its failover duties which have to be performed manually.
NetApp's SnapMirror software can be used to extend data availability across distances longer than 100km by using asynchronous mirroring.
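The distinction between MetroCluster's synchronous mirroring and SnapMirror's asynchronous mode can be sketched in simplified form. The `write_primary`/`write_remote` calls below are hypothetical stand-ins, not NetApp APIs: the synchronous path acknowledges a write only after both sites hold the data (which bounds it to short distances), while the asynchronous path acknowledges immediately and replicates in the background (which tolerates long distances at the cost of a small window of unreplicated data):

```python
import queue

replication_queue = queue.Queue()  # drained later by a background replicator

def write_primary(block):
    """Stand-in for a write to the local array."""
    pass

def write_remote(block):
    """Stand-in for a write to the mirror site."""
    pass

def synchronous_write(block):
    """MetroCluster-style: ack only after both copies land, so a site
    failure never loses an acknowledged write (hence the ~100km limit)."""
    write_primary(block)
    write_remote(block)  # caller waits out the remote round-trip
    return "ack"

def asynchronous_write(block):
    """SnapMirror-style: ack immediately, mirror later, so distance
    adds no write latency but recently acked data may lag the mirror."""
    write_primary(block)
    replication_queue.put(block)
    return "ack"
```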

What's next for NetApp hardware?

NetApp has confirmed it is working on new hardware platforms.
Rich Clifton, a senior VP at NetApp, provided a glimpse of NetApp's hardware direction this week at VMworld, as The Register sought to determine what storage OEMs would want with a PCIe switch.
Clifton said there would be phased ONTAP G announcements, suggesting that functionality could be delivered in stages. ONTAP 8 delivers clustering to the NetApp world and is the merger of the existing 7G and GX ONTAP variants, ONTAP being the operating system for the bulk of NetApp's storage products.
The senior VP said he knew nothing about PCIe use by NetApp but did say that any interconnect is chosen first on three parameters: bandwidth, latency, and cost. He said that current FAS 6000 clustered pair products use an InfiniBand point-to-point link between the two storage processors. In fact, NetApp is a significant shipper of InfiniBand links because of this.
Virtensys, another operation we spoke to at VMworld, makes a VMX-5000 switch that connects to x86/PCIe servers by external PCIe cables. This lets the servers share network adapters (such as FC, IB, and Ethernet) as well as storage mounted in the switch. The attached servers each see virtual adapters and virtual direct-attached storage and think it's all local.
Marek Piekarski, chief technical officer for Virtensys, says it's talking to potential server and storage OEMs. It's understood one server OEM has provisionally signed up. After NetApp was introduced into the conversation, he said that we could think of a scheme of storage processors in a matrix connected by PCIe. He didn't actually say they were talking to NetApp, though, so what follows is supposition.
There's no need for a Virtensys switch product in this environment unless the storage processors connect to the switch and share the I/O cards in it, and/or use the switch to simplify a mess of point-to-point links – meaning more than two storage processors, perhaps many more, heading towards five or more.
Piekarski said they were working on a second generation of the switch technology to add inter-processor communications capability so it would have InfiniBand-like low latency.
This nets out to a possible high-end NetApp FAS box – bigger than the 6000 – with a cluster of five or more storage processors linked by a low-latency PCIe cloud. All would talk to the outside world via a Virtensys gen 2 switch providing the IPC capability, and there would be shared adapter use so that the box could talk FC, FCoE, iSCSI, or NAS to connected servers.
Each storage processor would, logically, have its own storage enclosures with, potentially, fast disk, slow disk, and perhaps solid state drives in them. Some storage processors might, though, be doing storage management tasks and not actually manage their own local storage. How about that for an ONTAP 8 box?
If NetApp were to vary the HW specs of the storage processors, it has an obvious way to have faster/slower versions.
Getting back to VMworld, such a new box would have VMware vStorage API interface code added to it, so that it could play well in VMware's shiny new vSphere data centre environment.
The net net of this mental Lego construction exercise is that a FAS 7000 or 8000 with clustered storage processors could be announced in the 2009/2010 timeframe. It would be modular storage, but it would go way beyond the classic dual-controller arrays we have become used to, taking NetApp up into Symmetrix territory and/or into the Data Direct/Isilon product area of very fast clustered storage.
Take this with a pinch of salt, though. Virtensys could be talking to other storage OEMs instead of, or as well as, NetApp.

Sectors - The feud continues: NetApp slams EMC's dedupe plans

The brawling has moved the Ruptured Monkey blog to assign wrestler-style names to some of the loudest bloggers, as you can read here.
If you are tired of the feuding and fighting and want some actual information about EMC, check out this post about RecoverPoint objectives from virtualtacit.
Another post comes from a blogger we have wanted to feature for a while: the Lone Sysadmin. We imagine this blogger high in the saddle, rounding up lost servers out on the prairie. Or maybe not: this post sees him explore the murky world of file deletion.
If you feel that HP is the new IBM – colossal, competent, charmless – its storage blogs won't change your mind. Check out this mild effort on storage virtualisation.
We wanted to finish this week with some HDS blogs, but something is badly borked with its site: posts that appear in our RSS reader die once we hit Hitachi's site. Sigh... so instead we'll point to a press release to tell you about the executive reshuffle there, and straight to the IDC document – about replicated storage being a bigger problem than new unstructured data – that is the subject of a Hu Yoshida post. Actually, we may not need to bother with that one: the analysts told us about this a couple of years back.

Apple beats Intel to Nehalem-EP chip launch

Ponder this: Is an Intel product launch still a launch, if the product debuts very publicly in an Apple computer?
I won't presume to answer that question. But the fact is that Intel will launch Nehalem-EP server processors later this month, despite their manifestation Tuesday in the new Mac Pro under their official model names: the Xeon 3500 and 5500.
The chips – known in their desktop variant as the Core i7 – are being offered in eight-core or four-core configurations and, like all Nehalem-architecture processors, come with an integrated memory controller for (theoretically) better performance. (Intel's previous Core architecture does not integrate the memory controller.)
Other Nehalem-architecture features include Hyper-Threading – good for, according to Apple, "up to 16 virtual cores" – which improves multitasking, and Turbo Boost Technology, which dynamically increases the processor's frequency as needed.
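Apple's "up to 16 virtual cores" figure is just the Hyper-Threading arithmetic: two hardware threads per physical core. A minimal sketch of that calculation:

```python
def logical_cores(physical_cores: int, threads_per_core: int = 2) -> int:
    """Logical (virtual) cores exposed to the OS with Hyper-Threading
    enabled: physical cores x threads per core."""
    return physical_cores * threads_per_core

logical_cores(8)  # dual quad-core Xeon 5500 Mac Pro: 8 physical cores
logical_cores(4)  # single quad-core Xeon 3500 configuration
```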
The Mac Pro also offers high-end Nvidia and ATI graphics. Systems can be configured with either Nvidia GeForce GT 120 or ATI Radeon HD 4870 graphics chips.

Intel’s domination continues

Intel, the 2007 channel champion in the processor category, widened its lead over AMD to emerge a winner.

Intel revamped its channel strategy in the second quarter by introducing a second tier to distribute its products. The vendor also launched Atom processors, creating a new market of computing devices: netbooks and nettops. In contrast, AMD had another dismal year. Within the first five months of 2008, AMD's Director of Channels and its Managing Director put in their papers. Meanwhile, AMD's Phenom processors and Opteron processors (codenamed Barcelona) received a lukewarm response from both partners and customers.

There was just one silver lining for AMD: there are still some very staunch partners who swear by its products and technologies.

Price-performance

Price, performance and features are important criteria for processors. Both Intel and AMD raised the stakes with several new platform launches, the former launching two new platforms, Nehalem and Atom, and the latter launching its ambitious Shanghai platform to take on Nehalem. Intel positioned its Pentium Dual Core and Core 2 Duo processors in the mainstream PC market, maintaining a price range of Rs 2,600-Rs 7,000 throughout the year. AMD offered its Athlon 64, Athlon X2 and Phenom X3 processors at prices approximately 10-30 percent lower than comparable Intel chips, but system makers felt that the final difference on the assembled product was less than Rs 1,000 – a premium most customers were willing to cough up. AMD also successfully positioned its low-cost, low-energy Sempron processors for entry-level markets. By Q3 2008, Intel started offering Atom processors bundled with motherboards at prices lower than a typical Sempron CPU-motherboard combo. In the high-end PC market, Intel's quad-core processors more than matched AMD's Phenom X4 and even low-end Opterons.
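The pricing argument above – a 10-30 percent CPU discount shrinking to a small gap on the finished machine – is easy to make concrete. The rupee figures below are illustrative assumptions, not quoted prices:

```python
def assembled_price_gap(intel_cpu: float, amd_discount_pct: float,
                        common_parts: float) -> float:
    """Price gap between otherwise-identical Intel- and AMD-based builds
    when only the CPU differs. The percentage discount on the CPU is
    diluted by the cost of the components common to both builds."""
    amd_cpu = intel_cpu * (1 - amd_discount_pct / 100)
    return (intel_cpu + common_parts) - (amd_cpu + common_parts)

# Illustrative: a Rs 4,000 Intel CPU with AMD 20% cheaper, plus
# Rs 20,000 of common components -> the assembled gap is only Rs 800,
# consistent with the sub-Rs 1,000 difference system makers reported.
assembled_price_gap(4000, 20, 20000)
```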

In the server class of processors, system builders felt that Xeons scored over Opterons. Though AMD seems to have bridged the gap in terms of perceived price-performance by launching Phenom II and the third generation of Opterons (Shanghai), system makers feel that Intel’s new Core i7 (Nehalem) will take technology leadership away again.