Information Tiering or Storage Tiering?

Storage tiering is fast becoming a must-have feature among storage vendors today. 3Par (recently acquired by HP) has had its Dynamic Optimization and Adaptive Optimization for years. Compellent, too, has been touting its Data Progression technology for just as long. Both support volume-based and sub-volume-based data movement between different disk profiles within the storage array.
Over the past year or so, the big boys have realized that there is something going on here, and they have decided to jump in while the water's hot. EMC has FAST and FAST2, while IBM recently announced Easy Tier.
For the uninformed, storage tiering relies on different disk profiles within a storage array, where the mixture of speed and $/GB are the two key factors used to determine where a data block will go. The fastest disk profile holds the most frequently accessed data blocks, while the least frequently accessed blocks are moved to the slowest disk profile. Usually SSDs, Fibre Channel disks and SATA disks are provisioned to create three tiers of storage. Data access statistics gatherers work with the storage policy engine to determine the dynamic movement of data blocks among these three tiers.
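To make the loop concrete, here is a minimal sketch of a frequency-based policy engine of the kind described above. The class, tier names and capacities are all made up for illustration; no vendor's actual engine is this simple.

```python
# Hypothetical sketch of a frequency-based tiering policy engine.
# Tier names, capacities and the ranking rule are illustrative only.
from collections import Counter

class TieringEngine:
    def __init__(self):
        self.access_counts = Counter()   # per-block access statistics
        self.placement = {}              # block id -> current tier

    def record_access(self, block):
        # The "statistics gatherer": count every read/write per block.
        self.access_counts[block] += 1

    def rebalance(self, ssd_capacity, fc_capacity):
        # The "policy engine": rank blocks by access frequency.
        # The hottest band lands on SSD, the next band on Fibre
        # Channel, and everything else falls through to SATA.
        ranked = [b for b, _ in self.access_counts.most_common()]
        for i, block in enumerate(ranked):
            if i < ssd_capacity:
                self.placement[block] = "SSD"
            elif i < ssd_capacity + fc_capacity:
                self.placement[block] = "FC"
            else:
                self.placement[block] = "SATA"
        return self.placement
```

In practice a real array would run the rebalance pass periodically and actually migrate the blocks, but the ranking idea is the same.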
All these technologies basically rely on statistics collected within the storage array during data block access, and these statistics are likely based on a simplistic LRU (Least Recently Used)-style mechanism, where the most frequently accessed data blocks are the ones that remain in the highest disk profile, for example SSDs (Solid State Drives). [NOTE: I am open to these companies giving me a lesson in their storage tiering technologies]
While this is fine and dandy, it is probably not the most intelligent way of treating data. For example, data that is being snapshotted or backed up can register as frequently accessed and could therefore end up in the highest disk profile. This creates a situation where the disk profile no longer matches the actual value of the data or information.
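A toy example of the distortion: a naive per-block counter cannot tell business I/O apart from a backup's sequential sweep. The block names and the promotion threshold below are invented for illustration.

```python
# Illustrative only: a cold block that is only ever read by the nightly
# backup sweep still crosses a naive promotion threshold.
from collections import Counter

PROMOTE_THRESHOLD = 3  # accesses needed before a block is "hot" (made up)

def ssd_candidates(accesses):
    counts = Counter(accesses)
    return sorted(b for b, c in counts.items() if c >= PROMOTE_THRESHOLD)

volume = ["db-index", "old-report"]      # every block on the LUN
business_io = ["db-index"] * 5           # real workload touches only the index
backup_io = volume * 4                   # four nightly fulls read every block

print(ssd_candidates(business_io))               # ['db-index']
print(ssd_candidates(business_io + backup_io))   # ['db-index', 'old-report']
```

The stale report gains "heat" purely from being backed up, so it would be promoted to the expensive tier alongside genuinely hot data.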
If you take a step back and look at the entire design, what we are seeing is that most storage arrays are actually gathering statistics about the "containers" of the data blocks. If this 4K block is frequently accessed, move it to SSD. Ultimately, there are internal high water marks and low water marks that determine the "intelligence" behind moving data blocks from one tier to another.
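The water-mark logic amounts to simple hysteresis. Here is a sketch under assumed thresholds (the numbers are mine, not any vendor's):

```python
# Hypothetical high/low water-mark movement rule. The thresholds are
# made up for illustration; real arrays tune these internally.
HIGH_WATER = 10   # promote one tier up at/above this access count
LOW_WATER = 2     # demote one tier down at/below this access count

def next_tier(current_tier, access_count):
    tiers = ["SSD", "FC", "SATA"]          # fast -> slow
    i = tiers.index(current_tier)
    if access_count >= HIGH_WATER and i > 0:
        return tiers[i - 1]                # crossed the high water mark
    if access_count <= LOW_WATER and i < len(tiers) - 1:
        return tiers[i + 1]                # fell below the low water mark
    return current_tier                    # between the marks: stay put

print(next_tier("FC", 12))  # SSD
print(next_tier("FC", 1))   # SATA
print(next_tier("FC", 5))   # FC
```

Note that nothing in this rule knows what the block contains; it only sees the counter.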
In fact, if you look closer, the situation is quite similar to data deduplication technologies as well. Many solutions out there are probably deduping the containers of the data blocks, not the actual content itself.
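A quick way to see the container-versus-content distinction: a fixed-block deduper hashes aligned chunks, so identical content shifted by a single byte produces no matching hashes at all. The payloads and 4-byte "block size" below are purely illustrative.

```python
# Toy fixed-block ("container") dedup: hash aligned chunks of the data.
# Identical content at a shifted offset shares no chunk hashes.
import hashlib

def block_hashes(data, block_size=4):
    return {hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)}

a = b"the same payload repeated"
b = b" " + b"the same payload repeated"   # identical content, shifted 1 byte

shared = block_hashes(a) & block_hashes(b)
print(len(shared))   # 0 -> nothing dedupes despite identical content
```

Content-aware approaches (variable-length or content-defined chunking) exist precisely to avoid this alignment blindness, which is the same blindness the tiering statistics have.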
So, when exactly does Information Tiering become relevant? To a SAN, the content in the LUNs has little meaning, because the information resides in the filesystem that "owns" the LUN. For example, NTFS will reorganize and restructure the layout of a LUN so that it can become a file cabinet for the files it is about to store. Similarly, a database server is likely to lay out its schema and information landscape on the LUN from the storage array. I hope you can see a trend here: the SAN has little intelligence about the files or information it stores. NAS probably has more visibility, but that's another story.
I hate to be the spoiler here, but it seems to me that we are addressing this storage tiering thing from an angle where there is less knowledge of the content. So how well do most present storage tiering technologies match the needs of the business, where more intelligence about the content is probably most useful? Hmmm...