Sunday, March 29, 2009

Dojo Chart Library

I read an article, Dojo Now With Chart Tools, by Matthew Russell, the author of the O'Reilly book Dojo: The Definitive Guide.

The article lists several demos of generating charts with the dojox.charting library. I started by copying the code into an HTML file. Instead of loading the Dojo libraries onto my local computer, even though I have a personal web site on my Mac, I used an xDomain reference to Google's Dojo CDN. This way I don't need a web server at all, and I can send the demo HTML to my colleagues to share it. My colleagues at work really enjoyed it.

This demo does not just reproduce the original JS code. I enhanced the charts with different CSS styles and colors, which is easy to do. In addition, I allocated a 600x600 area at the top for the chart and listed my demos in a UL list underneath it. This arrangement requires clearing the area of any previously displayed chart. The Dojo library does not provide an API to clear a chart, but I finally found a way: simply remove all the child nodes in the area (see the sketch below). Even though this works, it is not recommended; a Dojo insider pointed out that this method does not clean up all the objects created for the chart. I posted my question to StackOverflow and Dojo's forum.
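Here is a minimal sketch of the clearing step, assuming the chart lives in a div with id chartArea; the id, function name, and rebuild code are just placeholders for illustration:

// Assumed container: <div id="chartArea" style="width:600px; height:600px;"></div>
function clearChartArea() {
    var node = dojo.byId("chartArea");
    // Remove every child node left behind by the previous chart.
    // This clears the DOM, but as noted above it does not destroy
    // the chart objects Dojo created behind the scenes.
    while (node.firstChild) {
        node.removeChild(node.firstChild);
    }
}

// Rebuild a chart after clearing, for example:
// clearChartArea();
// var chart = new dojox.charting.Chart2D("chartArea");
// chart.addPlot("default", { type: "Lines" });
// chart.addSeries("Series A", [1, 2, 4, 8, 4, 2, 1]);
// chart.render();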

My previous blog also mentioned an issue with the xDomain reference. All of these were real challenges for me since I am new to Dojo and have not done much development with it. However, the demo and the enhancements let me learn a lot.

Here is my demo on Google Code.


Saturday, March 28, 2009

FireBug and xDomain Reference for JS APIs

Last week I read a very interesting article on Dojo's charting library. There is some demo code on the web page. Instead of downloading Dojo's library to my local computer's web site, I chose to load the Dojo library from Google's CDN. This method is called a Cross Domain Resource Reference, or xDomain reference.

In this way, I don't even need a web server. I just use VI to write an HTML file with JavaScript and Dojo API calls. That's the basics of a web application: HTML + JS. I dropped the HTML file into my browser (Firefox) and it worked right away.
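A minimal sketch of such an xDomain page looks like the one below; the Dojo version in the URL is just an example, and the chart code is reduced to a bare skeleton:

<html>
<head>
<!-- Load the cross-domain (xd) build of Dojo from Google's CDN -->
<script type="text/javascript"
    src="http://ajax.googleapis.com/ajax/libs/dojo/1.3.0/dojo/dojo.xd.js"></script>
<script type="text/javascript">
    dojo.require("dojox.charting.Chart2D");
    dojo.addOnLoad(function() {
        // Build a simple line chart once the xd modules have loaded
        var chart = new dojox.charting.Chart2D("chartNode");
        chart.addPlot("default", { type: "Lines" });
        chart.addSeries("Series A", [1, 2, 4, 8, 4, 2, 1]);
        chart.render();
    });
</script>
</head>
<body>
<div id="chartNode" style="width: 400px; height: 300px;"></div>
</body>
</html>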

However, with the same code, which I found on the DojoToolkit web page, one feature did not work: the animation of the curve chart. This feature is added by calling the Magnify() API function on the chart. Behind the scenes, animation parts are created for each point. On Dojo's web page the magnification works fine, but not in my HTML.

I spent about a day before I finally figured it out. What I was missing was adding the following code to the head section of the HTML:

<head>
...
<script type="text/javascript">
...
dojo.require("dojo.fx");
...
</script>
</head>


FireBug helped me find this out. When I check my HTML source in FireBug, the script-loading section shows the loaded source code in the FireBug window. However, for the xDomain-referenced HTML page, the source code is only partial, and it is displayed as one very long line like this:


As you can see, it is hard to read when I copy the code into VI:



Fortunately, the same code is available on the DojoToolkit web page, where Dojo's library files are not xDomain references; they are in the same domain. Using FireBug to take a peek inside that page, I found the Magnify.js source code in a nice layout:



By the way, I also tried to get the source code directly from Google's CDN site, for example dojo.xd.js. It also displayed as one very long line. But when I went to the DojoToolkit web page to load the source code from their domain, I was prompted to save or open the js file, and the downloaded file is in a nice layout. This reveals that a CDN is not only for fast downloading; the files themselves are also very small, because all the unnecessary characters such as line breaks are removed!


Sunday, March 22, 2009

Using CTE in T-SQL

Last week I discovered CTEs (Common Table Expressions) when I posted a question to StackOverflow. The CTE is part of the ANSI SQL-99 standard, and it was added to Microsoft SQL Server 2005.

It was new to me. I applied the answers to my T-SQL code and I really like it: it is so simple and readable. With its recursive power, I resolved my issue in a couple of lines of code. Today I googled for more information, and here are some links with a good overview of CTEs:


Another example I tried was replacing a T-SQL cursor by using both a CTE and a table variable:
DECLARE @row INT;
DECLARE @rowMax INT;
DECLARE @myTable TABLE (
    id INT IDENTITY(1,1), -- key with auto increment
    name VARCHAR(100),
    dt DATETIME );
-- Use a CTE to select data into @myTable
WITH CTE_Temp(name, dt) AS
(
    -- SELECT to hold a temporary set of data
    SELECT tagName AS name, dt FROM myTagTable
    WHERE version = 0
)
INSERT INTO @myTable (name, dt) -- Insert into @myTable
SELECT name, dt FROM CTE_Temp;

SELECT @row = MIN(id), @rowMax = MAX(id)
FROM @myTable;

WHILE (@row <= @rowMax)
BEGIN
    -- do something with the data in @myTable for the current @row
    SET @row = @row + 1; -- move to the next row
END

Of course, this can also be done with a SELECT ... INTO ... FROM statement (see the sketch below). This is just an example of using a CTE to hold a slice of data from a base table.
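For comparison, here is a minimal sketch of the SELECT ... INTO alternative; the #myTemp name is just for illustration, and myTagTable is the same base table as in the example above:

-- SELECT ... INTO creates and fills a temp table in one statement
SELECT tagName AS name, dt
INTO #myTemp
FROM myTagTable
WHERE version = 0;

SELECT name, dt FROM #myTemp;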

The power of the CTE is its unique feature: recursion. Within the WITH block, you can define two basic queries: a base or anchor query and a recursive query. Here is an example of a Split-like function that splits a string by a delimiter and returns the pieces as a table:
CREATE FUNCTION [dbo].[udf_Split]
(
    -- Parameters: the delimited string and the delimiter (default ';')
    @p_StringList NVARCHAR(4000),
    @p_Delimiter NVARCHAR(512) = ';'
)
-- Return table definition
RETURNS @Results TABLE (
    position INT NOT NULL,
    item NVARCHAR(MAX)
) AS
BEGIN

    -- Use a WITH statement with recursive power
    WITH Pieces(pn, start, stop) AS
    (
        -- Anchor query: start from pn = 1, start = 1, stop = CHARINDEX()
        SELECT 1, 1, CHARINDEX(@p_Delimiter, @p_StringList)
        UNION ALL
        -- Recursive query: advance pn, start and stop
        -- past the previous delimiter
        SELECT
            pn + 1,
            stop + 1,
            CHARINDEX(@p_Delimiter, @p_StringList, stop + 1)
        FROM Pieces
        WHERE stop IS NOT NULL AND stop > 0
    )
    -- Test the result
    -- SELECT * FROM Pieces;

    INSERT INTO @Results
    SELECT
        pn, -- as position
        SUBSTRING(@p_StringList, start,
            CASE WHEN stop IS NOT NULL AND stop > 0
                 THEN stop - start
                 ELSE LEN(@p_StringList) END)
            AS s -- as item, using SUBSTRING() to cut the piece out of the string
    FROM Pieces;

    RETURN;
END
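
A quick usage example (the input string is just for illustration):

SELECT position, item
FROM dbo.udf_Split(N'red;green;blue', N';');
-- Returns three rows: (1, 'red'), (2, 'green'), (3, 'blue')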


Saturday, March 14, 2009

Use sp_who2 to Get Current Login User Information

Recently I have been working on Microsoft SQL Server 2005 to resolve some security issues. One question was how to get information about the currently logged-in users.

As usual, when I cannot find an answer in 10 minutes, I post my question to the StackOverflow web site. I posted this one early in the morning before going to work, and by the time I had biked to my office I already had several answers. I tried all of them and found that sp_who2, an undocumented stored procedure in Microsoft SQL Server, is the tool to get the information I need. With this tool I can find out who is using the SQL server at any time. For example, if a critical data-refresh or migration job is scheduled, I could use it to find out whether any users are accessing the SQL server and send out warning emails before the job executes.

It is very easy to use this SP:

EXEC sp_who2;

The SP returns a list of logged-in users as a table if you run it in a Microsoft SQL Server Management Studio query window. However, I only want to see the relevant columns and filter the users with WHERE and ORDER BY clauses. So I found a way to define a table variable and dump the result into it like this:
DECLARE @retTable TABLE (
    SPID INT NOT NULL
    , Status VARCHAR(255) NOT NULL
    , Login VARCHAR(255) NOT NULL
    , HostName VARCHAR(255) NOT NULL
    , BlkBy VARCHAR(10) NOT NULL
    , DBName VARCHAR(255) NULL
    , Command VARCHAR(255) NOT NULL
    , CPUTime INT NOT NULL
    , DiskIO INT NOT NULL
    , LastBatch VARCHAR(255) NOT NULL
    , ProgramName VARCHAR(255) NULL
    , SPID2 INT NOT NULL
    , REQUESTID INT
);
INSERT INTO @retTable EXEC sp_who2;
SELECT Status, Login, HostName, DBName, Command,
    CPUTime, ProgramName, BlkBy AS [Blocked By]
FROM @retTable
--WHERE Login NOT LIKE 'sa%' -- if not interested in sa
ORDER BY Login, HostName;

To view all the user information, you need to log in as sa or as a user with administrative privileges. Otherwise, you may only see your own session or limited information.


Thursday, March 05, 2009

Using EXEC() AT Continued

My previous blogs on this topic demonstrate a way to use EXEC(...) AT ... to pass a query through to a linked server. The performance of this method is much better than a T-SQL query that references the linked server directly. I tried it against a very big Oracle table and it was fast.

Today, I found another very interesting issue with this pass-through query method. If the execution on the remote server hits an error, such as a constraint violation or a wrong field name, the execution does not stop; the errors are thrown only at the end of the execution. For example, the following code executes completely:

DECLARE @sql NVARCHAR(MAX);
DECLARE @myCount INT;
DECLARE @id INT;
-- Initialize settings
SET @myCount = -1;
SET @id = 1;
-- Build the pass-through query
SET @sql = N'
BEGIN
  SELECT COUNT(*) INTO :myCount
  FROM owner.myTable
  WHERE id1 = ' + CAST(@id AS VARCHAR) + N';
END;';
-- id1 is an incorrect field name
EXEC (@sql, @myCount OUTPUT) AT linkedOracleServer;
PRINT 'Count: ' + CAST(@myCount AS VARCHAR);
-- Prints Count: -1 -- the batch was not stopped!


I tried similar code with T-SQL using the linked server directly:
DECLARE @myCount INT;
-- Initialize setting
SET @myCount = NULL;
-- Use a T-SQL query against the linked server
SELECT @myCount = COUNT(*)
FROM linkedOracleServer.owner.myTable
WHERE id1 = 1;
-- Nothing is printed; the invalid column raises an error right away!
PRINT 'Count: ' + CAST(@myCount AS VARCHAR);


I found this interesting problem when I ran a stored procedure that uses EXEC() AT to update values on the Oracle side. It had been running fine, but today I found an error at the end of the execution. The SP did not stop. I finally figured out that there were several constraint violations. Since the whole stored procedure completed and no exception stopped the program, the subsequent code set flags in another log table marking the insertion as successful.

I tried running a test scheduled job with this problematic SP. The job log does indicate an exception and a step failure; however, the SP still executed to completion.

EXEC() AT is a good way to leverage a remote server's full power; however, you have to be very careful with the process. Test your code thoroughly before putting it into production. Another option may be to schedule a job to run the code and, if there is any failure, roll back the changes.
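Here is a minimal sketch of that idea, wrapping the pass-through call in TRY/CATCH; the linked server name, table, and column are just placeholders, and a cross-server transaction like this may require MSDTC to be configured:

DECLARE @id INT;
SET @id = 1;
BEGIN TRY
    BEGIN TRANSACTION;
    -- Pass-through update on the remote server; ? is the parameter placeholder
    EXEC ('UPDATE owner.myTable SET status = 1 WHERE id = ?', @id) AT linkedOracleServer;
    COMMIT TRANSACTION;
    PRINT 'Remote update committed.';
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    PRINT 'Remote execution failed: ' + ERROR_MESSAGE();
END CATCH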
