
Applied Microsoft SQL Server 2008 Reporting Services

by patricet28 2. April 2008 09:45

I just read on Teo's blog that he's planning on releasing a new book on RS2008.

If it's anything like his other book, Applied Analysis Services 2005, which sits on my nightstand, then it's going to be mandatory reading for all Microsoft BI practitioners.

I really look forward to reading it!


Creating native MDX business rules with PerformancePoint Server Planning

by patricet28 27. February 2008 11:22

PEL is the default language for creating business rules in PerformancePoint. PPS-P then translates PEL into T-SQL or MDX. You can view the generated code in the debug window.

However, for certain scenarios, using native MDX code can be much more efficient.

To use native MDX, follow these steps:

- Enable SQL/MDX in the PPS application (use the Admin console to change this setting).

- When creating your rule in the Planning Business Modeler, select 'native MDX script' in the rule implementation field.

- In the code window, type your MDX code as if you were in the calculation script of your Analysis Services cube.

- Set the rule Status field to 'inactive' (you cannot deploy a native MDX rule with an 'active' status).

- Save your model.

- Go to the PPS application database and search for the RuleSetsOrRules table.

- In that table, set the IsActivated flag to 'True' for the rule you just created in PPS.

- Go back to the Business Modeler and deploy your rule. It should then write your rule to the cube calculation script (you can check this by generating the script for your PPS cube in Management Studio).
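As a sketch, the activation step against the application database could look like the T-SQL below. The RuleSetsOrRules table is the one mentioned above, but the column names and the rule name in the WHERE clause are assumptions on my part; check your own PPS application database before running anything like this.

```sql
-- Hypothetical sketch: activate a native MDX rule directly in the
-- PPS application database. The table name comes from the steps above;
-- the column names and rule name are assumed, so verify them first.
UPDATE dbo.RuleSetsOrRules
SET    IsActivated = 1              -- 'True': allow the rule to deploy
WHERE  [Name] = 'MyNativeMdxRule';  -- hypothetical rule name
```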

Native MDX rules are a bit more challenging to deploy, but the performance gain is worth the effort!

Good luck!


TechDays 2008: Optimisation Performances Analysis Services 2005

by patricet28 18. February 2008 00:26

Many thanks to those of you who attended my presentation at TechDays 2008 in Paris, earlier this week.

As promised, here are the scripts that I used during my demos, as well as the PowerPoint slides themselves -- you'll need to read French to get anything useful out of them :-)

The Techdays.zip archive contains the following files:

  • Optimisation Performances AS2005.pptx: the PowerPoint presentation, in French
  • Techdays-FE.xmla: the XMLA script to recreate my demo database
  • TechDays-FE.mdx: the script that contains the queries I ran against the techdays-FE cube
  • TechdaysQueries.sql: the SQL queries to determine the time spent in the storage engine vs. the formula engine

You'll need the AdventureWorksDW database (http://www.codeplex.com/MSFTDBProdSamples/Release/ProjectReleases.aspx?ReleaseId=8392) to be able to run the FE and SE queries I used in my demo.

Good luck and feel free to drop me a message if you have any questions!


Data Generation with VSDBPro

by patricet28 3. January 2008 09:23

You've built your data warehouse, created your OLAP cube, and written your ETL scripts to populate your relational database. Now come the famous questions: "How long will it take to process the cube?", "How much storage do I need for the cube?", "What if I throw these additional aggregations into the mix?"

Of course, there is always the popular rule of thumb: a processing rate of 1.5-2M rows per minute, and an OLAP store roughly 1/4th to 1/6th the size of the relational database (without indexes). But what if you need more precise metrics? There are two ways to get better results: either you fill your relational store with real data (most accurate, but not always possible during the development phase) or you find a way to fabricate a high volume of fake data.
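To make the rule of thumb concrete, here is a back-of-the-envelope calculation in T-SQL. The 20M-row fact table and the 10 GB relational store are made-up inputs for illustration, not measurements:

```sql
-- Back-of-the-envelope estimates from the rule of thumb above.
-- Inputs are illustrative, not measured values.
DECLARE @fact_rows     BIGINT       = 20000000;  -- rows in the fact table
DECLARE @relational_gb DECIMAL(9,2) = 10.0;      -- relational size, no indexes

SELECT @fact_rows / 2000000.0 AS best_case_minutes,   -- 2M rows/min
       @fact_rows / 1500000.0 AS worst_case_minutes,  -- 1.5M rows/min
       @relational_gb / 6.0   AS low_cube_size_gb,    -- 1/6th of relational
       @relational_gb / 4.0   AS high_cube_size_gb;   -- 1/4th of relational
```

For these example inputs, that works out to roughly 10-13 minutes of processing time and a 1.7-2.5 GB OLAP store.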

The remainder of this post explores a way to create a high volume of test data, which is pretty much what you need to a) get a tangible measurement of how long it takes to process the cube, and b) determine whether you have adequate hardware resources to support your app (in terms of CPU, RAM, and disk storage).

I've been using Visual Studio for Database Professionals (http://msdn2.microsoft.com/en-us/teamsystem/aa718807.aspx) and its data generation feature to create high volumes of test data. My goal was to populate 30+ SQL tables (used to create about 10 dimensions) and my fact table with 20M rows. My initial tests were quite frustrating, as the data generation process threw an out-of-memory exception after roughly 3M rows. (Configuration: dual-core T7200, 3GB RAM, Windows Server 2003 Enterprise Edition x64, SQL Server 2005 with SP2 and Cumulative Update package 4.) I tried multiple times on various hardware platforms (some with 8GB RAM), but no matter what I did, I always ended up with this memory exception. Memory usage would slowly increase over time until it reached a point where the generation process threw an exception and stopped.

AS2005 best practices teach us that a well-designed fact table should contain only aggregatable numeric values and foreign keys to dimension tables. Indeed, that was exactly my situation.

But it looked like VSDBPro had issues with handling the foreign keys. Here are the steps I followed to get around this problem:

  • Install PowerTools for VSDBPro (http://blogs.msdn.com/gertd/archive/2007/08/07/it-is-august-6th.aspx)
  • In your SQL Server relational schema, drop relationships between your fact table and your dimension tables (I used a copy of my original database schema). Make sure that there are no foreign key constraints between the fact table and the dimension tables.  The screenshots below show an example with the AdventureWorks database that ships with the SQL Server samples.


  • In VSDBPro, import your new database schema
  • Create a data generation plan for your dimensions. I usually have two data generation plans: one for generating dimensions, the other one for generating data for the fact table.
  • Create another dgen plan for your fact table. For each dimension key field in the fact table column details, replace the foreign key generator with a sequential data bound generator.
  • Provide the SELECT statement that will retrieve keys from your dimension tables.



  • I then run the dimensions dgen plan, followed by the fact dgen plan. Notice that only the fact table is selected in the second plan. If you had kept the foreign keys, you would have had to generate both the dimension tables and the fact table at the same time (VSDBPro cannot create keys based on existing data; it has to generate both the primary and the foreign keys).
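For reference, the SELECT statement you provide to the sequential data bound generator is just a query returning the valid surrogate keys for one dimension. With the AdventureWorksDW sample schema used in the screenshots, it might look like this (table and column names assumed from the standard sample database):

```sql
-- One such query per dimension key column in the fact table;
-- the sequential data bound generator cycles through the returned values.
SELECT ProductKey  FROM dbo.DimProduct;
SELECT CustomerKey FROM dbo.DimCustomer;
```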


Back to my 20M-row fact table. On my machine, memory usage slowly climbed to 91%, but it held steady until the generation of all 20M rows completed. It took a couple of hours to generate that amount of data, but it was worth the wait! Now I can answer the questions raised at the beginning of this post with tangible numbers: I know exactly how much disk space the cube uses for storage, how long it takes to process, and how much memory is consumed in the process.

Good luck!

 


BI podcasts

by patricet28 14. May 2007 16:16
In addition to Jeff Raikes' webcast, there is a whole slew of interesting BI podcasts available here: http://www.microsoft.com/bi/resources/biconference.aspx


Day 3 at the Microsoft BI Conference

by patricet28 12. May 2007 03:02

Today was the third and last day of the first-ever (as Alex Payne likes to say) Microsoft Business Intelligence conference.

It's been quite an interesting week and an eye-opener for me, as I discovered the features and possibilities of PerformancePoint Server 2007. Last night I downloaded the CTP2 of PPS and started playing with it. I was afraid that the learning curve would be quite steep, but in the end, it turned out to be more approachable than I thought. I have yet to play with the budgeting and planning side of it, which I anticipate will be a little harder to grasp. I encourage people interested in dashboards and scorecards to really have a look at PPS Monitoring & Analytics; it is worth the time it took to download..

Anyway, back to the conference. Steve Ballmer, CEO of Microsoft, was on stage to talk about the place of BI in the software industry and emphasize two messages:

  • Microsoft is very serious about entering this market
  • Microsoft’s goal is “BI for the masses”. Democratizing BI with high-power, yet easy-to-use software components.

I liked his idea of presenting the world of personal productivity and the world of IT systems, with most of the improvements happening in between. The role of BI, as Microsoft sees it, is to make it easier for users to use their personal productivity tools (the world where they are comfortable) to navigate and analyze information extracted from IT systems. He also mentioned BI applications used internally at Microsoft, and frankly, I would have loved to see a demo of one of these systems.

He then outlined what he sees as the future for BI:

  • Self-service, transparent
  • Embedded BI
  • Guided analytics (to bubble up exceptions, outliers, potential problems)
  • New BI models to help drive business

His formal talk was only 30 minutes, and I wish it had been a little longer. I love Steve's style; he is energetic and passionate about what he's talking about. Today was probably not one of his best speeches, but what little he had to say, I found interesting.

The rest of his time was spent on Q&A with Chris Caren, General Manager in the Office Business Applications division. I did not write down all the questions, but here are a few of them:

Q: Role of Microsoft Office in the future?

A: Office needs to migrate to a software+services world. We have the ribbon, but we need to make the UI even more approachable. Office needs to be more integrated with business and server offerings.

Q: Relationship with SAP?

A: Microsoft will continue to invest with SAP as a duet. There will be Microsoft solutions, SAP solutions and duets with combined Microsoft/SAP solutions. Microsoft will continue on making platforms interoperable.

Q: Changes in the industry and consolidation. Is this a good thing for customers/partners?

A: It’s the natural evolution of the business. There are three tiers of companies (pillars, large and small) and it’s a natural evolution to have consolidation between tiers.

Q: BI to create new service offerings?

A: BI is a huge opportunity in that space (example given in the financial services world, where a company can offer BI services to its partners/suppliers/vendors)

Q: What about BI as a service?

A: There are lots of things going on at Microsoft in the collaboration/communication area. BI as a service is also a huge opportunity for smaller businesses to get their hands on forecasting, budgeting and planning tools.

After Steve Ballmer's keynote session, I headed to Donald Farmer's talk on "Data Mining: A Platform for Intelligent Applications". I was surprised to see so many people at a talk on data mining (hall 6E was 80% full). It may have had to do with a more restricted selection of talks to choose from, but more likely, I think the subject has started to take off and resonate with customers. In my consulting engagements over the last year, I have picked up more and more requests for data mining presentations.

Donald's presentation was great and the demos he chose were spot on.

He first started by reminding the audience that data mining is not a single product offering from Microsoft, but rather a platform on which people can build intelligent applications, which are, according to his definition, applications that learn from the past and can respond to new information.

It was interesting to see the show of hands when he asked how many people in the audience were pure data miners. Only one person raised his hand; the rest of the audience was primarily composed of BI developers. This speaks volumes about the untapped potential that we have with this data mining platform.

After talking about the CRISP-DM process model (Cross Industry Standard Process for Data Mining) and showing how the Microsoft DM technology maps to the various steps in the model, he moved on to his first demo: build a model to predict whether someone could be a home owner, based on number of children, number of cars owned and yearly income. It is somewhat the canonical demo for DM, but interesting nonetheless. The nice part was the introduction to DMX (the DM query language) to create, train and use data mining models (this is where I discovered natural prediction JOINs).

The next demo was a form entry validation demo, where the entries in a form are passed to a data mining algorithm for validation. This one was an interesting use of data mining. We often associate DM with prediction and some kind of magic algorithm, but this data validation demo provided a strong case for the use of DM in existing customer environments. It showed, very appropriately, the use of prediction probability, not only to determine whether an entry is wrong, but also how wrong we think it is. I'll definitely have to build that demo on my laptop…

To finish this very short half-day of the conference, I sat in on the "Building BI solutions with Microsoft Excel 2007 and Microsoft Analysis Services 2005" session. T.K. Anand, Program Manager in the AS2005 team, and Allan Folting, PM in Excel, co-hosted this session. As a consultant on Microsoft BI technologies, the subject of this talk fell right into the heart of my current skills. I therefore did not learn anything new, but nevertheless found the talk well articulated and fun to watch. T.K. and Allan played their parts perfectly and it turned out quite entertaining, as any well-rehearsed presentation should be. The talk was mostly one big demo that they kept building on, showing most of the personalization features of AS2005. Among the topics covered were:

  • Pivot table styles
  • Server-side filtering
  • Display of member properties
  • Contextual filters
  • Date filters
  • Translation of data and metadata
  • Perspectives
  • Calculations
  • Set creation
  • Actions and Drillthrough
  • Server side formatting and guided analysis
  • KPIs
  • OLAP tools and Intellisense on cube formulas

This is just off the top of my head, as I did not take notes towards the end of the session, but I encourage people to watch the demo on the conference DVD. It pretty much covers all the personalization features of AS2005 that can be exposed in Excel 2007 and makes a compelling argument for pairing the client piece (Excel 2007) with the server piece (AS2005). To all who are thinking of using Excel 2003 as the analytic client: have a look at what Excel 2007 has to offer!

This session concluded the Microsoft Business Intelligence conference. For a first attempt, I think it was pretty good. Of course, there are tons of things that can be improved to reach the level of more established conferences like TechEd or DevDays, but the few discussions I've had on the floor with customers seem to show great momentum for the Microsoft BI platform, and I am sure the organizers will take the time to digest the feedback and bring us something breathtaking next year! I am really looking forward to receiving the conference DVD in a couple of weeks, so that I can watch the sessions I couldn't attend (like "Diagnosing AS2005 MDX query performance bottlenecks", "Behind the scenes with PerformancePoint Business Planner Calculations" or "Real-world Microsoft BI implementations: Lessons learned the hard way", to mention only a few..)

I now have a couple of hours before flying back to Paris then I’m taking a week off, so take care and have a safe trip back if you’re also flying home!

-- Patrice.


Update to yesterday's report

by patricet28 11. May 2007 13:47

OK, so a source who shall remain anonymous (!) provided the missing information to my report:

  • Patrick Baumgartner, Program Manager in the PerformancePoint team, was the fellow who brilliantly demoed PPS with his Deal/No Deal demo
  • Due to the vast sum of information needed to write my report, I failed to mention Len Wyatt, PM in the SQL team, who did a great demo of an SSIS package calculating decimals of Pi. 

And to save you the trouble of searching the Internet, as an added bonus, here is the picture of Bill Baker with his blond wig (which, I might say, suits him quite well…)

 


Day 2 at the Microsoft BI Conference

by patricet28 11. May 2007 08:17

It has become very common for any Microsoft speaker - or any public speaker for that matter - to say that he is 'super excited to be here'. So common that Alex Payne had to add an extra 'I am extra super excited to be here'… :-) I thought that was funny, as I recall saying exactly the same thing when I gave public presentations.

Onto the first keynote speaker. I did not have the pleasure of knowing Ted Kummert; he joined the Data and Storage Platform division long after I returned to the French Microsoft office. But his keynote session made me want to know more about Katmaï, the next version of SQL Server. Ted has a clear and concise way of presenting things, and a pace and tone of speech that make him easy for non-American people (such as myself) to understand. Some might say that I have a bias towards Microsoft speakers, but that's simply not the case. In my 17+ years at Microsoft, I've attended numerous conferences and heard many Directors, GMs and VPs address large audiences. I could name a few who were simply boring to death (no, I won't be giving any names..).

In contrast, Ted managed to keep my interest, although he was covering a subject that I knew quite well: pervasive BI. He painted a clear picture of the Microsoft BI stack and its evolution over the years and described it as an extensible platform that a) enables a broad range of applications to be developed and b) empowers end users to make more informed decisions.

He was supported in his presentation by a demo from Donald Farmer (former Group Program Manager of Integration Services). Donald gave a very good demonstration of the Data Mining add-in for Excel 2007, first showing the integration with Excel (detect meaningful groups in a population of customers, then publish the information back to a SharePoint server), then going on to show the integration of the data mining technology in SSIS. I've seen this demo many times, but in my opinion, Donald clearly highlighted the two facets of Data Mining: collaborative tools for information workers and technology for IT pros. It makes a compelling argument for exploring data mining capabilities with Office12 and AS2005.

Ted came back on stage to talk about the upcoming release of SQL Server, code name "Katmaï". Five years went by between SQL Server 2000 and the release of SQL Server 2005; he wants to shorten the interval between release cycles to something closer to 24-36 months. As I mentioned in yesterday's report, it remains to be seen whether or not the market can absorb more frequent releases of an enterprise product like SQL Server. However, Ted made the point that Katmaï would extend the capabilities of Yukon in four areas:

  • As an enterprise data platform, with a unique declarative management framework
  • By extending the platform beyond relational, with the introduction of new data types, such as filestream or location-based types (think GPS coordinates)
  • With better developer productivity, through dynamic development using LINQ and a new entity data framework that allows a developer to manipulate business objects and adds a layer of abstraction over the relational model.
  • With pervasive insight, with the integration of the SoftArtisans technologies to create, view and edit reports directly in Microsoft Office.

To illustrate his talk, François Ajenstat, Director in the SQL Marketing organization, came on stage to show a preview of the integration of the SoftArtisans technology and give the first public demo of Katmaï. He demonstrated the use of the new GEOMETRY data type with a demo showing restaurant density by zip code. The integration of spatial queries in a Virtual Earth mockup was interesting and opened the door to a new breed of applications, but will it prove to be a compelling argument when selling the SQL Server platform and architecture against Oracle, DB2 and the like? Time will tell, but this reminded me of the (numerous and long-running) discussions we had when I was working on (now defunct) WinFS applications…

Anyway, Ted concluded his keynote with the message that it was the best time to be in the data business and that there were tremendous opportunities to extend the reach of BI technologies. With that, I can only agree.. :-)

The next keynote speaker was Dr. Robert Kaplan, Professor at the Harvard Business School and developer of the concepts of activity-based costing and the balanced scorecard. I really like the idea of inviting external speakers to talk at Microsoft conferences. It gives a very different perspective on the technologies that come out of the product groups and provides a foundation for understanding and explaining the feature set of the products we advocate to our customers. But this presentation brought me back 20 years in time and reminded me of the lectures I had as a student. There is absolutely no question that the content was well thought out and well presented. It is just that the slides (black font on white background) looked like the ones we produced 20 years ago and had nothing to do with a modern PowerPoint presentation, where colors and catchy graphics might sometimes make up for poor-quality content. Here, every word had meaning, and the quality of the content took precedence over the aesthetics of the slides themselves.

That said, his talk was divided into three parts: 

  • Definition of a balanced scorecard and how it fits into a corporate strategy
  • The role of an IT organization in creating strategy-focused organizations (hint: IT is an enabler of business units and corporate strategy and must move from a culture of basic competency (read: operational efficiency) to a culture of business partnering with other business units within the organization).
  • The role of software in strategy management.

By far the most interesting part (at least for me) was the description of what it takes for a corporate strategy to be successful. People interested in the subject should definitely listen to the talk when the conference DVD becomes available. A rough summary is given in the five points below:

  • Strong executive leadership is required for success in executing a strategy
  • The strategy has to be translated into operational terms. Strategy maps and balanced scorecards (available in PerformancePoint Server) can bridge the strategy implementation gap between the leadership team and front-line employees: strategy maps articulate what a company is trying to accomplish, while balanced scorecards measure the efficiency of the action plans.
  • Align the organization with the strategy. Dr. Kaplan compared the efficiency of an eight-man rowing team with that of a company with eight business units: all rowers have to be aligned on the same goals in order to succeed
  • Motivate to make the strategy everyone’s job
  • Link the strategy to operations and monitor the rate of success (or failure) of the strategy at regular intervals, using KPIs, dashboards, balanced scorecards and other time-driven ABC reports

Again, like Michael Treacy’s talk, I thought this one had a different flavor from traditional Microsoft presentations, and I praise the organizers of the conference for inviting external speakers. Definitely an experience to reproduce for future editions of the Microsoft BI conference. 

Given my desire to focus on PerformancePoint for this conference, I had high expectations for the next session I attended, called "PerformancePoint Server Business Modeling and Planning". Peter Bull, Group Program Manager, gave the talk, but his speaking pace (too fast) combined with his slides (way too crowded) made it (very) difficult (read: impossible) for me to follow. He was certainly very knowledgeable on the subject, but I couldn't listen to him and read the slides at the same time. By the time I was done reading a slide (in a font so small I had to make an extra effort to read it), he'd moved on to another topic. I must admit he lost me fairly quickly, and although I listened carefully and stayed until the end of the session, I am not sure I understood half the words he said… A sensation unfamiliar to me; I did welcome the end of the session…

As an AS200x consultant, the chalk talk on "Solving business problems with MDX" brought me back into more familiar territory. Richard Tkachuk, Akshai Mirchandani and Robert Zare hosted this chalk talk during lunch time. Although I do recognize the speakers' knowledge of the product (and they've helped the BI community many times over), I have mixed feelings about this session. I am not convinced the format (chalk talk with open questions) was very conducive to solving real problems. I think a formal breakout session with carefully selected issues and well-documented solutions that people could use as a reference would be a better conduit for exposing and solving business problems. 

That said, there are a couple of takeaways from this session:

  • Loops in MDX: there is no such thing, and MDX is not like T-SQL or C#. However, loops can be simulated in multiple ways: a) use PASS calculations, b) use GENERATE to create a set and iterate over that set, c) use a stored procedure that calls out to the AS object model
  • Use of subSELECTs: they are used in Excel 2007 when you use a filter. However, one thing to keep in mind is that if the Sub-SELECT refers to something outside of the sub-SELECT, Visual totals are turned off. We always return the grand total.
  • Best practice with parent-child hierarchies: Hide the key attribute of the P/C hierarchy (the child attribute) and only allow the parent to be visible. Hiding multiple attributes can lead to unpredictable results
  • Rolling 12 months measure: 1) define a calculated member in the time dimension that can be used with multiple measures (this is the way the time wizard works) or 2) create a named set with the last 12 months and SUM on this set
  • Can’t do multi-SELECT on calculated members in Excel 2007. You have to switch Excel 2007 to be in compatibility mode
  • Reference dimensions do not behave the same way as regular dimensions with regard to AUTOEXISTS. Apparently, there is no easy way to do this in MDX; it was suggested that maybe EXISTS on a measure group could be used.
  • Emulating semi-additive measures in the Standard Edition of AS2005: the only way is to write an MDX script to determine the last non-empty child.
  • Non_Empty_Behavior and calculated measures. I did not really like the answer provided, which basically suggested that we should avoid using it, or at least be VERY careful when using it, as it is misused most of the time. Calculation times can really go off the charts if the Non_Empty_Behavior directive is not used, and I expected more than this vague answer. The performance guide and Mosha's blog were mentioned, and I agree that these are to date the most knowledgeable sources of information on this topic, but the AS team should have provided a little more than what they did during the chalk talk. 
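As an illustration of the rolling-12-months named-set approach (option 2 in the takeaways above), here is a minimal MDX calculation-script sketch. The member keys and measure name are assumed from the Adventure Works sample cube, and the anchor month is hard-coded for simplicity, so adjust both to your own cube:

```mdx
-- Hypothetical sketch for an AS2005 cube calculation script:
-- a named set of the last 12 months, and a measure that sums over it.
CREATE SET CURRENTCUBE.[Last 12 Months] AS
    LastPeriods(12, [Date].[Calendar].[Month].&[2004]&[6]);

CREATE MEMBER CURRENTCUBE.[Measures].[Rolling 12M Sales] AS
    Sum([Last 12 Months], [Measures].[Internet Sales Amount]);
```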

“Delivering intelligence through MOSS 2007” by Peter Petesch, Enterprise Technology Architect at Microsoft, promised to be an interesting session, and a number of people apparently agreed with me, as it was pretty well attended. However, the demos were very simplistic: Peter sometimes only showed the available options in the menus without even using them. The various sections of the presentation seemed like a collage of each product's datasheet.

His talk covered Reporting Services integration, ProClarity integration, Excel Services integration, BSM/PPS integration and AS integration. Maybe I am being overly skeptical, but I did not understand what made SharePoint 2007 so special. OK it can render data that comes from different sources, so what? I left the session uninspired and a little bit disappointed, without a really good understanding of all the fuss around MOSS. There must be something special about MOSS, otherwise people would not be rushing to try/implement it. Peter’s session just did not convey that excitement… 

The same cannot be said about the last session of the day: “BI Power Hour” with Bill Baker (GM, OBA), Donald Farmer (PM), Brian Welcker (GPM), G. Taylor (Hitachi Consulting) and another member of the BI team that I did not know (I apologize for not having caught his name when it appeared on the slide). The theme of the session was “no slides, demos only and (moderately organized) chaos”. The idea was to use demos to present the various Microsoft BI technologies in a fun way.

  • Donald Farmer presented his “Magic 8-ball” SSIS package, in an attempt to measure the quality of a data mining algorithm. You ask a random question and the magic 8-ball provides an answer that can be compared to the one out of a data mining algorithm…
  • Brian Welcker showed his implementation of a custom reporting services control that surfaced in a fun way (a tic-tac-toe game) the sales results of the AdventureWorks sales people
  • Hitachi Consulting presented a Reporting Services mobile framework solution that took BI to mobile devices
  • The youngest member of the BI team (whose name I did not catch) managed to have Bill Baker wear a blond wig (photos will undoubtedly make the rounds on the Internet. Stay tuned!), put a stocking on his head (or what looked like one from where I was sitting), and showed a PerformancePoint implementation of the Deal/No Deal game. 

Between each demo, they threw goodies to the audience. People who showed the most excitement were more likely to be thrown something, so you can imagine the atmosphere in the room… At the end of the session, I managed to get a BI Power Hour T-shirt. I would have preferred the more discreet black BI Conference polo the speakers were wearing (which you can wear while visiting customers), but hey, I'm super excited (…) I got the BI Power Hour T-shirt. This one will surely become a collectible!

That session was an awesome way to finish the day and I really enjoyed the concept of demoing technologies while having fun. Kudos to the team for thinking of this!


Products availability announcements

by patricet28 10. May 2007 17:15

Ah.. Someone just reminded me that I forgot to mention three “small” details in my report from the Microsoft BI conference.

  • Jeff Raikes announced that the next version of SQL Server, code name “Katmaï” would be available in 2008 (note: as many of my customers are still running SQL Server 2000, it’ll be interesting to see Katmaï's adoption rate…). More information on Katmaï here: http://www.microsoft.com/sql/prodinfo/futureversion/default.mspx
  • He also announced the acquisition of SoftArtisans Inc.’s OfficeWriter for managed report authoring in Microsoft Office (http://www.softartisans.com)
  • General availability for Office Performance Point Server 2007 is scheduled for November 2007, with a CTP3 due around mid-June. The RTM version could be downloadable from MSDN as early as September 2007.


Day 1 at the Microsoft BI Conference

by patricet28 10. May 2007 07:26

Today was the first day of the Microsoft BI conference in Seattle.

The day started with a series of keynote sessions, the first one by Jeff Raikes, President of the Microsoft Business Division and the second one by Michael Treacy, author of many books on corporate strategy analysis and former MIT professor.

Alex Payne, Group Product Manager in the BI team, introduced both speakers, and it was good to finally be able to put a face on a name that comes up very often in Microsoft BI circles..

Jeff Raikes, as usual, gave a good presentation, and his vision of “BI as a self-service for knowledge workers, using their day-to-day tools” really resonated (at least with me..:). He was supported by two demos, one showing the end-to-end BI story (by Christina ???) and the other about Performance Point Server by Bruno Aziza. Let me take this opportunity to vent a little about something that really bothered me during the demos. It seems to be a common theme here, as I had the same complaint about other sessions too. Isn’t it well known that when you’re demoing to a large audience, you should increase the font size so that the text is readable from the back of the room? I was sitting about halfway back in the room and I could barely read what was on the screen.. I can’t imagine what people sitting at the back could see.. Either I’m too old, or the font size was way too small.. Other than that, the session was good for a keynote.

I particularly appreciated the second keynote, by Michael Treacy. I don’t know if it was because the subject was less familiar to me than the Microsoft strategy/product roadmap, or because his tone was different from that of the traditional Microsoft speakers. In any case, I found his stories about corporate strategy and productivity improvement a welcome change from traditional keynote sessions. It made me want to go out and buy his books “The Discipline Of Market Leaders” and “Double-Digit Growth: How Great Companies Achieve It – No Matter What”. His theory is that speed of learning, innovation and strategic adaptation to market changes are the ultimate characteristics of companies that can sustain double-digit growth over a long period of time. Among his examples were Toyota (for their reliability efforts) and Microsoft (for their capacity to transition to new waves: DOS->graphics, client-server, Internet and now BI). This is one session I’ll definitely watch again when the conference DVD comes out… The only thing that went over my head was his joke about being from Boston and something about the Red Skins. I’m pretty sure only the American part of the audience fully appreciated that cultural joke..

After the two keynote sessions, it was time for the breakouts.

For this conference, I decided to get more familiar with the Performance Point Server offering. Coming from an AS2005 background, that was a natural thing for me to do. However, I also wanted to attend the chalk talks about Analysis Services. Unfortunately, this created some conflicts in my schedule, as most of the interesting AS2005 chalk talks happened at the same time as the Performance Point ones. For example, at 10:30AM, I had “Real-time BI and advanced process setting for AS2005”, “PPS Planning architecture” and “Large scale AS implementations: lessons learned” all at the same time. Because I could catch the PPS talk and the large scale AS talk later in the conference (although not with Dave Wickert as the speaker), I decided to attend the real-time BI chalk talk. Paul Sanders did explain incremental dimension processing and proactive caching, but I guess I already knew too much about the subject for it to totally capture my interest. I wish he had talked about slowly changing dimensions and provided more details on the challenges around incremental processing, but that was probably too much for the audience. Retrospectively, I probably should have gone to Dave’s chalk talk.. Oh well, I’ll catch him on the conference DVD..

I then went to the “Microsoft Office Performance Point Server 2007 Monitoring & Analytics architecture” talk.. Whff.. What a name! Russ Whitney did a good job of presenting the architecture of the Dashboard Designer and the runtime environment. Based on the questions that were asked, it seems most of the audience was already familiar with Business Scorecard Manager 2005 and ProClarity Analytics. Not being an expert in either of those products myself (I only have a theoretical knowledge of both), I sometimes did not fully understand the questions, but this will change, I swear..:-) I found the ability to bring together structured and unstructured data in the same scorecard interesting (actuals in an AS2005 cube with budget figures in an Excel spreadsheet, for example). I definitely have to take a closer look at the CTP2 of Performance Point Server. CTP3 should be available around mid-June.

In the afternoon, I had to make a tough choice between Robert Zare’s session on “Practical Design Techniques for Modeling Common Business Scenarios” and Chris Webb’s session on “Writing MDX for KPIs”. In the end, I decided to go to Robert’s session because most of my engagements as a consultant focus on modeling (vs. efficient MDX writing). Boy, was that session crowded… Robert was speaking in the 6E grand ballroom and the room was almost 80% full. It was an interesting presentation of the Adventure Works sample database and its associated BI modeling scenarios. Unfortunately, I thought he went too quickly over certain aspects that, in my opinion, can be the source of many performance issues, such as many-to-many relationships, currency conversion, and the use of measure expressions. When you listen to him, everything seems easy, whereas in real life it’s not.. His session, rated at level 200, was really more of a 300-level session.

To finish my day, I attended Elaine Andersen and Rex Parker’s talk on “PPS Monitoring and Analytics”. This session had a good mix of slides and demos. Just starting with Performance Point, I appreciated that the demos built on one another. Elaine started with the creation of a basic scorecard, sourcing the data from an AS2005 cube. Rex then added integration of a thin chart, an analytic grid and a strategy map. He then moved on to adding parameterization to the scorecard, before Elaine finally completed the demo with the integration of unstructured Excel data (budget information shown alongside AS2005 actuals). CTP3 should be exciting, with the integration of more report types, such as Excel Services views.. I can’t wait to download the bits and try them on my laptop!

That’s it for today. Tomorrow will be another full day..


Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen
