Memories of DATA & BI Summit 2018 | Dublin, Ireland.

This was a journey I had long thought about but could never make happen, for one reason or another. This year, though, I got the opportunity to represent myself, my country Nepal, and the Nepal Power BI User Group at the DATA & BI Summit 2018, held at The Convention Centre Dublin, Ireland, from 23rd to 26th April 2018.

https://www.databisummit.com/home

https://www.pbiusergroup.com/kathmandu

I was also appointed as a Program Committee member for the DATA & BI Summit 2018 in the Visualize track, where we teamed up to review abstracts submitted by great speakers from around the world. We worked really hard to finalize the content and select the most suitable sessions for the summit, with the help of Rose, Mark, and all the members of the Program Committee. During the abstract review process I came to know about different tools, techniques, processes, and methods related to various Microsoft products, mainly focusing on Power BI visuals. It was indeed a great opportunity and responsibility to serve on the Program Committee of any summit or conference. My thanks to all the members of the Program Committee team.

https://www.pbiusergroup.com/about/programming

I was also given the opportunity to be a speaker at this summit, and I was really happy when I found out. And so my journey began to attend the DATA & BI Summit 2018 as a speaker from my country, Nepal.

This time the DATA & BI Summit was held in Dublin, Ireland, and getting an Irish or European visa is difficult for applicants from developing countries like Nepal. I don't know why the process took nearly a month, but I finally got the visa, having prepared all the necessary documents and followed the process laid out by VFS Nepal for an Ireland visa. Thanks again to the Embassy of Ireland in India for granting me a conference visa, since there is no Irish embassy located in Nepal.

If anyone from Nepal wants to attend this type of Microsoft conference, or any other conference in Ireland, you can follow the VFS link below. (Don't rely on the VFS timeline alone: in my case they said the visa process would take a maximum of 15 days, but it took nearly a month. Luckily, I had applied almost a month before my conference date.)

http://www.vfsglobalirelandvisa.com/india/how_to_apply.html

I had a great experience of Ireland and its people, of speakers from around the world, MVPs, PUG leaders, and the Microsoft family, and also of the Nepalese brothers from my own hometown of Siddhipur, Lalitpur, who helped me a lot while I was in Ireland. Thanks to all of you for being so humble and helpful.

Also, here is what I posted after the DATA & BI Summit 2018:

It was full of excitement and happiness that I got a chance to meet some of my idols: Jen Stirrup, Chris Webb, Reza Rad, Leila Etaati, data geeks, BI heroes, MVPs, and the Microsoft Power BI team. Thanks a lot to Rose Spitzer, Jena Coffie, Mark Wilcock, and the Microsoft PUG team for giving me this opportunity. Feeling happy happy 🙂 #DataBISummit #PowerBI #PUG #Microsoft

You can find more at the link below:

https://www.databisummit.com/home

The official thank-you video for the DATA & BI Summit volunteers:

Some of the snaps during my journey to DATA & BI Summit 2018, Dublin, Ireland.

https://www.instagram.com/p/BiCt3zsFQiZ/?taken-by=anil_maharjan

Using Microsoft Azure Stream Analytics and Power BI: Real-time Telco fraud detection

This post draws on the main Microsoft Azure articles. During my free time I wanted to try out Azure Stream Analytics and learn more about it, and I found two great articles which helped me understand it in detail:

Below are those articles:

1) https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-real-time-fraud-detection

2) https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-power-bi-dashboard

Here one can learn about Azure Stream Analytics and how a Power BI visualization dashboard can be used for real-time data analysis, helping to visualize fraudulent phone calls detected by a Stream Analytics job in real time. Since I currently work in the telco sector, this analysis helps me a lot.

This tutorial provides an end-to-end illustration of how to use Azure Stream Analytics. You learn how to:

  • Bring streaming events into an instance of Azure Event Hubs. In this tutorial, you'll use an app provided by Microsoft that simulates a stream of mobile-phone metadata records.
  • Write SQL-like Stream Analytics queries to transform data, aggregating information or looking for patterns. You will see how to use a query to examine the incoming stream and look for calls that might be fraudulent.
  • Send the results to an output sink (storage) that you can analyze for additional insights. In this case, you'll send the suspicious call data to Azure Blob storage.
  • Also send the results to an output sink (Power BI), where you can analyze them for additional insights and build a real-time telco fraud detection dashboard in Power BI.

In this tutorial, we use the example of real-time fraud detection based on phone-call data. But the technique we illustrate is also suited for other types of fraud detection, such as credit card fraud or identity theft.

Scenario: Telecommunications and SIM fraud detection in real time

A telecommunications company has a large volume of data for incoming calls. The company wants to detect fraudulent calls in real time so that it can notify customers or shut down service for a specific number. One type of SIM fraud involves multiple calls from the same identity around the same time but in geographically different locations. To detect this type of fraud, the company needs to examine incoming phone records and look for specific patterns—in this case, for calls made around the same time in different countries. Any phone records that fall into this category are written to storage for subsequent analysis, and the same results are sent to an output sink (Power BI) where they can be analyzed for additional insights, enabling real-time fraud detection based on live phone-call data.
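To give a flavor of the SQL-like query involved, below is a sketch modeled on the query from article 1 above (the input alias CallStream, the Power BI output alias MyPBIoutput, and the field names follow the tutorial, so adjust them to match your own job):

-- Self-join the call stream with itself: the same SIM (CallingIMSI) making
-- calls within a few seconds through different switches is flagged as fraud.
SELECT System.Timestamp AS WindowEnd, COUNT(*) AS FraudulentCalls
INTO "MyPBIoutput"
FROM "CallStream" CS1 TIMESTAMP BY CallRecTime
JOIN "CallStream" CS2 TIMESTAMP BY CallRecTime
    ON CS1.CallingIMSI = CS2.CallingIMSI
    AND DATEDIFF(ss, CS1, CS2) BETWEEN 1 AND 5
WHERE CS1.SwitchNum != CS2.SwitchNum
GROUP BY TumblingWindow(Duration(second, 1))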

I followed articles 1 and 2 step by step, so you should go through those articles in detail. Below are some snaps of the steps I followed:

Now the Stream Analytics job starts looking for fraudulent calls in the incoming stream. The job also creates the dataset and table in Power BI and starts sending data about the fraudulent calls to them.

Once we have finished creating the Azure Stream Analytics job, with its output going to a Power BI sink table named 'Telco_Fraud_Demo' inside the 'Data and BI Summit 2018' workspace in the Power BI service, we can create a real-time visualization dashboard in Power BI.

To do this, just log in to the Power BI service and, under your workspace (in my case 'Data and BI Summit 2018'), create a dashboard. Click +Add tile, where you will see Custom Streaming Data in the REAL-TIME DATA section; once you select Custom Streaming Data, you will see the same Power BI sink table that we created as the Azure Stream Analytics job output.

By selecting different visualization types, we can visualize and analyze the streaming dataset for additional insights, which helps with real-time telco fraud detection.

Also, as per the Power BI feature notice below, in the near future we will be able to connect to the dataset directly as a streaming dataset.

Thanks,

Anil Maharjan

Senior BI Engineer | Nepal Power BI User Group Leader

How to create a Microsoft Azure SQL Database and visualize it with Power BI.

This blog is mainly about creating a Microsoft Azure SQL database and visualizing it with the help of Power BI Desktop.

Firstly, we need a Microsoft Azure subscription in order to create the Azure SQL database. One can try Azure free for one month with $200 of credit. You can find more on this at the link below:

https://azure.microsoft.com/en-us/free/?v=18.03

Then you need to create the Azure SQL database. For this, please follow the link below, which walks through creating an Azure SQL database step by step. I have gone through each step in detail.

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-design-first-database

Below are snaps of some of the steps you will see while creating an Azure SQL database.

 

Meanwhile, it helps to know more about Database Transaction Units (DTUs) and elastic Database Transaction Units (eDTUs), which is useful for figuring out the Azure SQL database monthly subscription charges. One can learn more at the link below:

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-what-is-a-dtu

Also, one can learn more about elastic database pools.

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-what-is-a-dtu

Once you successfully create the Azure SQL DB, you will see something similar to the snap below. Here I created a database named AzureSQLDB with the AdventureWorks sample tables. I set my server name as anilmaharjan.database.windows.net, with 2 GB of total disk storage.

After that, I tried to connect to my Azure SQL database 'AzureSQLDB' but got the error below, which means we need to allow the client IP address to access the Azure SQL server. For this, we need to add a rule in the server's Firewall settings. Once the firewall setting succeeds, you are good to connect Power BI Desktop to the Azure SQL DB you just created.
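You can add the rule from the portal's Firewall settings blade. As a sketch, the same server-level rule can also be created with T-SQL by running sp_set_firewall_rule against the master database of your Azure SQL server (the rule name and the documentation-range IP below are placeholders for your actual client address):

-- Run against the master database; substitute your own client IP.
EXECUTE sp_set_firewall_rule
    @name = N'AllowMyClientIP',
    @start_ip_address = '203.0.113.5',
    @end_ip_address = '203.0.113.5';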

Do your visualization using Power BI Desktop; you can then upload it to the Power BI cloud service too. Find more on the different kinds of Power BI visualizations:

https://docs.microsoft.com/en-us/power-bi/power-bi-visualization-types-for-reports-and-q-and-a

Now you have successfully created an Azure SQL database and done different visualizations using Power BI. Also, it's good to see the data explore option in Azure: we can directly query and analyze the Azure SQL database in the Azure portal itself, which is great. One can also set up Azure Cost Management in order to optimize cloud spend, which is quite a good feature.

In summary, you have successfully set up an Azure SQL database and done different kinds of visualization using Power BI Desktop.

#AzureSQLDatabase #WindowsAzure #PowerBI

Thanks,

Anil Maharjan

Senior BI Engineer | Nepal Power BI User Group Leader

SQL Saturday #692 Conference and Nepal Power BI User Group Meetup a big success.

Last Saturday, on 23rd Dec 2017, we successfully conducted the SQLSaturday #692 conference and Nepal Power BI User Group meetup event in Kathmandu, Nepal.

The SQLSaturday #692 conference and Nepal Power BI User Group meetup is a free event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence, Power BI, and analytics.

I got a chance again to organize and speak at this great event. It was fun; we shared knowledge along with other SQL and Power BI geeks, and we had the great opportunity to host speakers from around the world. Thank you to all the speakers, MVPs, and members for being part of #SqlSaturday692 #SqlSatNepal #NepalPUG. We had great speakers and MVPs from around the world, such as Shree Prasad Khanal, Anil Maharjan, Gogula Aryalingam, Guy Glantser, Jonathan Stewart, Deependra Bajracharya, Virendra, and Dibya Tara Shakya.

I gave a presentation on 'PowerBI: Data Visualization SQL Saturday with R'. You can find my slides at the link below:

http://www.sqlsaturday.com/692/Sessions/Details.aspx?sid=69982

Also, it was a great opportunity to share more about the Nepal Power BI User Group with the community, and how one can engage with and join the NepalPUG group. Currently there are only a few members in Nepal PUG, and I hope it will grow soon.

http://www.pbiusergroup.com/kathmandu

The event was a big success: we had more than 100 participants, and the interaction between participants and speakers was really overwhelming.

We will conduct more Data Platform, SQL, and Power BI related events again soon. One difficulty we faced is not having any Microsoft office premises in Nepal. It would be great if Microsoft could open small office premises here in Nepal too, so that we could hold these kinds of Microsoft-related conferences in a Microsoft office. That would help the Microsoft community grow further and help engage more community members in one place.

Thanks,

Anil Maharjan

Senior BI Engineer | Nepal Power BI User Group Leader

Speaking at SQLSaturday Nepal (SQLSaturday #692) and the Nepal Power BI User Group Meetup.

Firstly, I am happy and excited that I will be speaking at SQLSaturday #692 and the Nepal Power BI User Group meetup on Dec 23, 2017. This is my fourth time speaking at these international events, and I'm really excited to be speaking this time as well.

I will be speaking on 'PowerBI: Data Visualization SQL Saturday with R'. Here is my abstract:

This session is mainly about learning more about Power BI and R visualization charts. From this session one can learn how to make some simple and quick visualizations using Power BI Desktop, taking real SQL Saturday data, publishing to the Power BI cloud service, and also publishing those visualization reports publicly to the web. This session mainly helps tell the story of SQL Saturday using Power BI and R visualization charts.

Overall, using Power BI visualizations, one can find out which SQL Saturday was conducted in which state or country, in which year, and on which particular day, along with the total sessions conducted. One can also see SQL Saturday trends year over year, and in which month most SQL Saturdays are held, as per the SQL Saturday history data. So this session will surely help one learn about Power BI, its capabilities, and SQL Saturday stories.

So, What is SQLSaturday?

SQLSaturday is a free training event for Microsoft Data Platform professionals and those wanting to learn about SQL Server, Business Intelligence, Power BI, and Analytics. Please register soon as seating is limited, and let friends and colleagues know about the event.

This event will be held on Dec 23 2017 at Hotel Yellow Pagoda, Kantipath, Kathmandu, Nepal.

Please do register and be a part of this great event.

http://www.sqlsaturday.com/692/EventHome.aspx

One can join our local SQL Server User Group ‘Himalayan SQL Server User Group’.

http://www.sqlpassnepal.org/

So, What is PUG?

PUG offers online and in-person communities where you can share best practices, take part in exclusive training opportunities, and connect with other passionate Power BI users from various professions and industries. Get involved in your local user group today, and gain a better understanding of data that will enable you to excel in your role.

One can join our local Power BI User Group ‘Nepal Power BI User Group’

http://www.pbiusergroup.com/kathmandu

Hope to see you there and don’t forget to say hello to me in the event.

Thanks,

Anil Maharjan

Senior BI Engineer

http://np.linkedin.com/in/maharjananil

Speaking at SQL Server Geeks Annual Summit 2016, Asia’s Only Data & Analytics Conference.

Firstly, I am happy and excited that I will be speaking at the SQL Server Geeks Annual Summit 2016, Asia's only Data & Analytics conference. I have been selected as a speaker from Nepal and am the only speaker representing Nepal.

http://www.sqlservergeeks.com/ssgas-2016-anil-maharjan/

About the Conference:

SQLServerGeeks Annual Summit 2016 (SSGAS 2016) is Asia's only SQL conference focusing on Microsoft Data. Scheduled from Aug 11-13 (with a pre-con on Aug 10) at NIMHANS, Bangalore, the summit will see 120+ sessions delivered by 50+ speakers across 3 days. Joseph Sirosh (CVP, Data Group) will keynote the conference. The complete MS Data Platform stack is covered at the summit. Speakers include Group PMs, Senior PMs, PMs, Premier Field Engineers, Escalation Engineers, and Data Architects from the MTC. The SQL CAT, SQL TIGER, and Global Black Belt teams from Microsoft will deliver top-notch content and spend quality time with attendees at the convention center.

More details are here: http://www.sqlservergeeks.com/summit2016

I am Speaking

Why one should attend this conference:

  • To get real-world training from industry experts
  • Know the latest trends in Data & Analytics world
  • Special focus on Analytics, Cloud & Big Data
  • To network & connect with the MVPs, MCMs
  • To learn from SQL product team, Redmond
  • Direct access to product team members
  • Benefit from new delivery formats like Open-Talks & Chalk-Talks*
  • Expert level demo-oriented sessions
  • Five parallel full-day classroom training

Please do register and be a part of this great event and follow #SSGAS2016 on Twitter for more news.

Hope to see you there and don’t forget to say hello to me in the event.

Thanks,

Anil Maharjan

Senior BI Engineer

https://www.linkedin.com/in/maharjananil

SQL Server 2016 Discovery Day – Data Visualization using R and Power BI.


Last week, on July 9th, 2016, we successfully conducted the SQL Server 2016 Discovery Day – Release Event in Kathmandu, Nepal. The SQL Server 2016 launch event and Discovery Day is a free, one-day event where individuals come together, learn about SQL Server 2016, and solve a pre-determined problem.

https://www.eventbrite.com/e/sql-server-2016-discovery-dayrelease-event-kathmandu-nepal-tickets-25888946536?aff=efbnreg#

I got a chance again to speak at this great event; it was fun, and we shared knowledge along with other SQL geeks. I gave a presentation on:

SQL Server 2016 Discovery Day – Data Visualization using R and Power BI.

You can find my slides using the link below:

Also, we had a small solution development competition at the event, where we used the free Power BI Desktop tool to create some visualizations and tell a story behind the data.

Below are some quick visualizations that I created at the event using real PASS SQL Saturday data.

Steps for making data visualizations using Power BI Desktop.

Step 1: Firstly, download and install the free Microsoft Power BI Desktop tool. For R-related charts, and to play around with R code, we also need to download and install R and the RStudio IDE. Below are the links for Power BI Desktop, R, and the RStudio IDE.

https://powerbi.microsoft.com/en-us/desktop/?gated=0&number=1

https://www.r-project.org

https://cran.r-project.org/src/base/R-3/

https://www.rstudio.com/products/rstudio/download/

Step 2: Get the data related to SQL PASS, SQL Saturday from the link below:

https://drive.google.com/file/d/0BzlPwGX6UtxUNnlfZ01KczF0NHc/view

This zip file contains different .sql files with the data included within the scripts.

Step 3: Load those scripts into a SQL Server database, and then use Power BI Desktop's 'Get Data' tab to load the data into Power BI Desktop and do the visualization.
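For example, assuming you have already created a database to hold them (here a hypothetical name, SQLSaturdayDB), each extracted script can be loaded from the command line with sqlcmd:

sqlcmd -S localhost -d SQLSaturdayDB -i dbo.SQLSatSessions.Table.sql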

blog1

Alternative: If you haven't installed SQL Server, you can use a single file such as 'dbo.SQLSatSessions.Table.sql' and then an Excel file to manipulate the data into the required format.

Step 4: Load only the data part from the file 'dbo.SQLSatSessions.Table.sql' into Excel, as below.

blog2

Then use the Text to Columns feature in the DATA tab, as below, to split the data into the proper column format.

blog3

Once you have prepared your data in the format below, you can start your analysis using Power BI Desktop.

blog4

Step 5: Use step 3 or step 4 to load the data into Power BI. We have used step 4 here, so use Power BI to load the data from the Excel source; you will then see the data columns on the right-hand side, as below.

blog5

Step 6: Now start visualizing using the free Power BI Desktop tool. Here we are using the SQL Saturday session details data to prepare different charts: a line chart, treemap, filled map, table, and R script visual, which we have used to show the different visualization details below.

blog6

blog61

One can learn how to create these different charts by going through the link below:

https://powerbi.microsoft.com/en-us/documentation/powerbi-service-visualizations-for-reports/

Step 7: As a quick sample, here is how we can create a simple line chart using the drag-and-drop features of Power BI Desktop.

Just go to the right side of Power BI Desktop, select the EventDate and SQLSATURDAY fields, drop EventDate into the Axis section and SQLSATURDAY into the Values section, and change the SQLSATURDAY value to Count by clicking on the SQLSATURDAY field in the Values section.

blog7

It's just that simple, so you can try the different charts using the same drag-and-drop approach.

For the R script visual, you first need to know some R in order to create an R visualization chart in Power BI. One can learn R from https://www.r-project.org/
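As a minimal sketch of what an R script visual can look like: Power BI passes the fields you drag into the visual to your script as a data frame named dataset, so with the EventDate field from the SQL Saturday data you could plot events per year like this (assuming EventDate arrives in a format as.Date understands):

# 'dataset' is the data frame Power BI injects into an R script visual.
# Count SQL Saturday events per year and draw a simple bar chart.
years <- format(as.Date(dataset$EventDate), "%Y")
barplot(table(years),
        main = "SQL Saturday events per year",
        xlab = "Year", ylab = "Events")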

Step 8: Publish these reports to the Power BI cloud service by clicking the Publish button in Power BI Desktop. You can also publish these reports publicly to the web: once you publish your report to the Power BI service, go to Reports -> Your Report -> File -> Publish to web. After that, anyone can view these reports publicly on the web.

Also, the URL that you get through Publish to web can be embedded in your website.

Summary:

In summary, the Power BI Desktop visualizations above help tell us which SQL Saturday was conducted in which state or country, in which year, and on which particular day, along with the total sessions conducted. They also show the SQL Saturday trends year over year, and that most SQL Saturdays are held in September, as per the SQL Saturday data history.

They also show the distribution of SQL Saturdays by state, which reveals that most SQL Saturdays happen in North America.

How to Install Vertica in a single node.


This article is mainly related to:

  1.  Setup of Vertica in a single node.
  2. Vertica Console Management
  3. Tableau
  4. DBeaver

During my free time, I wanted to try installing Vertica and find out what it is all about: what we should know while installing Vertica, and what issues, difficulties, requirements, and processes we need to go through in order to set up Vertica and play around with it.

The HP Vertica Analytic Database is based on a massively parallel processing (MPP), shared-nothing architecture, in which the query processing workload is divided among all nodes of the Vertica database.

If you want to try Vertica and play around with this columnar database, then you should follow the steps below.

Step 1:

Firstly, you should have a Linux OS installed on a machine. For Vertica, the minimum recommendation is 3 nodes, which means three Linux machines running as different nodes. In my case, however, I wanted to install on only one node and try it out. I have Windows 7 installed, with Red Hat Linux running in a virtual machine. So, make sure you have at least one machine with Linux installed.

You can use the link below to find out the minimum requirements and server configuration: http://my.vertica.com/docs/5.1.6/HTML/index.htm#18671.htm

General Platform Recommendations

  • ext4 is recommended over ext3 for performance reasons.
  • Use 2GB of swap space regardless of the amount of installed RAM.
  • Place the database /catalog directory on the same drive as the OS.

Step 2:

Download all the required software related to Vertica from the site https://my.vertica.com/downloads/

In order to download, sign up for a Community Edition account. You can learn all of this from the video below:

http://www.vertica.com/files/myVerticaVideo/myVertica_Audio_Video_Combined_121009J.html

Here, I have downloaded the below versions:

Step 3:

After that, move the setup files into a directory in the Red Hat Linux virtual machine. I have put the setup files into the directory /data/Software; then open the terminal in the VM.

Run the below command as root user:

rpm -Uvh /data/Software/vertica-7.0.2-1.x86_64.RHEL5.rpm

After that, it will ask you to run /opt/vertica/sbin/install_vertica to complete the installation.

Step 4:

Run the script on the master node:

# /opt/vertica/sbin/install_vertica -s host_list -r rpm_package -u dba_username

Since I used only one node, below is my script:

 /opt/vertica/sbin/install_vertica -s localhost -r /data/Software/vertica-7.0.2-1.x86_64.RHEL5.rpm -u dbadmin

Note that if you want to install Vertica on multiple nodes, you include the different nodes in the host list.

Where the options are:

  • -s — a comma-separated list of hostnames or IP addresses to include in the cluster; do not include space characters in the list.
  • -r — the location of the installation RPM, e.g. "vertica-7.0.2-1.x86_64.RHEL5.rpm".
  • -u — the database administrator username (dbadmin).
  • -p — the dbadmin password.
  • -P — the root password.
  • -L — the location of the license.
  • -d — where the data will be located.

Note: if you omit the -u parameter, the default database administrator account name is dbadmin, who will only use admintools.

You can find more about installing Vertica on 3 nodes, or a complete cluster installation, at the link below:

http://www.aodba.com/main_articles_single.php?art=83&page=vertica

Step 5:

After that you may get some issues or errors like the ones below, which I got:

>> Validating node and cluster prerequisites…

Failures during local (OS) configuration for verify-127.0.0.1.xml:

HINT (S0305): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0305

TZ is unset for dbadmin. Consider updating .profile or .bashrc

HINT (S0041): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0041

Could not find the following tools normally provided by the mcelog

package: mcelog

HINT (S0040): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0040

Could not find the following tools normally provided by the pstack or

gstack package: pstack/gstack

WARN (N0010): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=N0010

Linux iptables (firewall) has some non-trivial rules in tables: filter

FAIL (S0150): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0150

These disks do not have ‘deadline’ or ‘noop’ IO scheduling: ‘/dev/sda1’

(‘sda’) = ‘cfq’, ‘/dev/sda3’ (‘sda’) = ‘cfq’

FAIL (S0020): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0020

Readahead size of sda (/dev/sda1,/dev/sda3) is too low for typical

systems: 256 < 2048

FAIL (S0030): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0030

ntpd process is not running: [‘ntpd’, ‘ntp’]

FAIL (S0081): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0081

SELinux appears to be enabled and not in permissive mode.

FAIL (S0310): https://my.vertica.com/docs/7.0.x/HTML/index.htm#cshid=S0310

Transparent hugepages is set to ‘always’. Must be ‘never’ or ‘madvise’.

Then go through each error at the link below:

https://community.vertica.com/vertica/topics/

This is all about the cluster prerequisites. Go through each FAIL (Sxxxx) code in the community forum, where you will find the solution for each error.
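As a sketch, here are the typical fixes for some of the FAIL codes above (run as root; the device name sda and the exact paths are from my single-node VM, so adapt them to your machine — on some Red Hat versions the transparent hugepage path is /sys/kernel/mm/redhat_transparent_hugepage/enabled):

# S0150: set the IO scheduler to 'deadline'
echo deadline > /sys/block/sda/queue/scheduler
# S0020: raise the readahead to the recommended 2048
blockdev --setra 2048 /dev/sda
# S0030: make sure ntpd is running and starts on boot
service ntpd start && chkconfig ntpd on
# S0081: put SELinux into permissive mode
setenforce 0
# S0310: disable transparent hugepages
echo never > /sys/kernel/mm/transparent_hugepage/enabled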

Step 6:

After that, run admintools as the dbadmin user; you will see the following:

After that create the database:

Then choose the host name where the database will reside:

Then define the catalog location and data path:

Since I have installed Vertica on a single node, the concept of K-safety does not apply. If we install at least 3 nodes, then we can have a K-safe database.

After that,

Then create the Vertica database, named Vertica_DB, on the single node.

After that, the database will be created.

 

Just click OK and you will see the Database configuration:

Then finally you can exit from the admintools:
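At this point you can do a quick sanity check by connecting to the new database with vsql, Vertica's command-line client (a sketch assuming the default install path):

# Connect as dbadmin and confirm the server responds
/opt/vertica/bin/vsql -U dbadmin -d Vertica_DB -c "SELECT version();"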

Step 7:

Vertica management console:

After successfully installing the Vertica database, you can now install the Vertica Management Console.

The new HP Vertica Management Console is an enterprise database management tool that provides a unified view of your HP Vertica database and lets you monitor multiple clusters from a single point of access.

You can find more on below link:

https://my.vertica.com/docs/5.1.6/HTML/index.htm#16773.htm

Run the command as a root user:

rpm -Uvh vertica-console-7.0.2-1.x86_64.RHEL5.rpm

After successful installation, it will show the URL https://localhost.localdomain:5450/webui. Just go to that URL and accept the license.

What you can do with Management Console

  • Create a database cluster on hosts that do not have HP Vertica installed
  • Create, import, and monitor multiple HP Vertica databases on one or more clusters from a single point of control
  • Create MC users and grant them access to MC and MC-managed databases
  • Manage user information and monitor their activity on MC
  • Configure database parameters and user settings dynamically
  • Access a single message box of alerts for all managed databases
  • Export all database messages or log/query details to a file
  • View license usage and conformance
  • Diagnose and resolve MC-related issues through a browser
  • Access a quick link to recent databases and clusters
  • View dynamic metrics about your database cluster

These features are very helpful for DBAs and developers, who can easily monitor multiple HP Vertica databases on one or more clusters from a single point of control.

The detailed installation and login setup process is shown in my SlideShare; you can see the link below:

https://www.slideshare.net/anil_maharjan/how-to-setup-vertica-in-a-single-node

Tableau:

Tableau can help anyone see and understand their data. Connect to almost any database, drag and drop to create visualizations, and share with a click.

To know the story behind your data stored in Vertica DB, you need a reporting tool that enables quick decisions and helps you get the value out of your data.

This is where Tableau can be used, which is one of the best reporting tools, I must say.

One can go to the Tableau portal, download the software, and play around with almost any database, worksheets, Excel files, and many other data files.

I have downloaded both the desktop and public versions of Tableau; the desktop version works as a 15-day trial.

Now, in order to connect to Vertica DB through Tableau, you need to install the Vertica client on your PC (in my case, a Windows 7 machine). For this, download vertica-client-7.0.2-1.64 from https://my.vertica.com/

Also, you can install the test database (the VMart schema) along with Vertica, but in my case I got an error, so I posted it to the Vertica community. One can join this community to share ideas and issues.

https://community.vertica.com/vertica/topics/cannot-create-vmart-example-db

After that, you can open Tableau Desktop and connect to Vertica DB in order to play around with it and find the story behind your data.

Also, you can install Tableau Public on your PC and do some analysis using different worksheets. I have done some analysis related to 'Average Percentage of Internet Users across the World' and published it to the public server that Tableau provides free for normal users.

https://public.tableausoftware.com/views/AveragePercentageofInternetUsersaccrostheWorld/AverageofIndividualsUsingtheInternet?:showVizHome=no#1

DBeaver:

DBeaver is free and open source (GPL) universal database tool for developers and database administrators.

  • Usability is the main goal of this project, program UI is carefully designed and implemented.
  • It is freeware.
  • It is multiplatform.
  • It is based on opensource framework and allows writing of various extensions (plugins).
  • It supports any database having a JDBC driver.
  • It may handle any external datasource which may or may not have a JDBC driver.
  • There is a set of plugins for certain databases (MySQL and Oracle in version 1.x) and different database management utilities (e.g. ERD).

You can find more at the link below:

http://dbeaver.jkiss.org/about/

I found this tool very easy to use to connect to Vertica DB and do some query analysis.
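For example, once connected you can run a quick query against Vertica's system catalog to confirm the node is up (a small sketch using the v_catalog.nodes system view):

SELECT node_name, node_state FROM v_catalog.nodes;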

Summary:

Through this article, I am sure you now understand how to install the Vertica Analytic Database on a single node, how to use the Vertica Management Console to monitor multiple clusters from a single point of access, and how to easily get the story behind your data in Vertica DB using the Tableau reporting tool.

Also, how you can use queries to do more detailed analysis of the Vertica database using the DBeaver tool.

The full step-by-step article in more detail can be downloaded from the link below:

https://www.slideshare.net/anil_maharjan/how-to-setup-vertica-in-a-single-node

Thanks,

Anil Maharjan

BI Engineer

http://np.linkedin.com/in/maharjananil

Cross joins across user-defined hierarchies aren't supported?

Hello all,

This may not be appropriate to post as a blog, but it might help those of us in the BI field who have a similar kind of confusion/issue.

I have some confusion about the Crossjoin function in MDX.

When I try to crossjoin different level sets of the same hierarchy, it shows an error.

For example.

‘The Customer Geography hierarchy is used more than once in the Crossjoin function.’

select {{[Customer].[Customer Geography].[Country].&[United States]}*

{[Customer].[Customer Geography].[State-Province].members}} on 0

FROM [Adventure Works]

WHERE Measures.[Internet Sales Amount]

Can we not crossjoin across user-defined hierarchies, or are they simply not supported?

Because I really need to implement the MDX above within my real cube.

I tried to implement it by defining a member on another hierarchy, but it doesn't give the result values we want/need.

with member [Customer].[Country].[United States ]as [Customer].[Customer Geography].[Country].&[United States]

select {

{[Customer].[Country].[United States ]}*

{[Customer].[Customer Geography].[State-Province].members}} on 0

FROM [Adventure Works]

WHERE Measures.[Internet Sales Amount]

Looking at the Adventure Works cube, the Customer dimension has attributes such as Country, State-Province, and others, along with the user-defined hierarchy Customer Geography.

So when we crossjoin the members of attributes of the same Customer dimension, it gives a result:

SELECT

{      {[Customer].[Country].[United States]}*

[Customer].[State-Province].Members

} ON 0

FROM [Adventure Works]

WHERE Measures.[Internet Sales Amount]

But similarly, when we try to crossjoin the attribute members of the user-defined hierarchy Customer Geography, it gives an error.

Does that mean we need to create some dummy/hidden attributes for that particular dimension in order to obtain the solution for my case? Maybe that is the solution.

Does this mean we cannot crossjoin across user-defined hierarchies?
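One workaround worth sketching for this particular example: when all we need is the states under United States from the same user-defined hierarchy, the Descendants function avoids the self-crossjoin altogether (though it does not cover every combination one might want):

-- Walk down within one user-defined hierarchy instead of crossjoining it with itself.
SELECT
Descendants(
    [Customer].[Customer Geography].[Country].&[United States],
    [Customer].[Customer Geography].[State-Province]
) ON 0
FROM [Adventure Works]
WHERE Measures.[Internet Sales Amount]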

This is just an example; I simply need something like this from within the same user-defined hierarchy. By defining user-defined calculated members on different attributes/dimensions, I am able to obtain the output below, but the values are not valid / do not match.

with member [Customer].[Country].[United States ]as [Customer].[Customer Geography].[Country].&[United States]

member [Customer].[State-Province].[Alabama ]as [Customer].[Customer Geography].[State-Province].&[AL]&[US]

select

{Measures.[Internet Sales Amount]} on 0,

{{[Customer].[Country].[United States ]}*

{[Customer].[State-Province].[Alabama ]}*

{[Customer].[Customer Geography].[Postal Code].members}} on 1

FROM [Adventure Works]

Country         State-Province   Postal Code (can be top 10 members)   Internet Sales Amount   Other Measures
United States   Alabama          2015                                  $9,389,789.51           111
                                 2450                                  $9,389,789.51           222
                                 2010                                  $9,389,789.51           333

Is there any alternative solution for this kind of issue? I hope someone has some ideas regarding this.

I also posted this on MSDN at the link below:

http://social.msdn.microsoft.com/Forums/en-US/sqlanalysisservices/thread/f03cd36e-8a4f-4ae7-a4b4-fb6f499dc9b9

Thanks,

Anil Maharjan

Generate possible combinations of MDX automatically by just pointing at the SSAS cube catalog.

Hello all,
After such a long time, I would like to post another blog which might be helpful to those who work in BI, mainly in SSAS/MDX.
I simply needed to generate some possible combinations of MDX automatically by just pointing at an SSAS cube catalog.
In order to get my solution, I researched a lot and also posted the request on the MSDN and BIDN forums, and got some help through my MSDN post from Hari.
To get the solution, we simply need to create an SSIS package as described below.
The steps are as follows.
1. Build a new SSIS package with a data flow task.
2. Add a script component with the Source setting as the first task.
3. Paste in the code shown below. This code takes the SSAS server and database name as input. It iterates through the dimensions and base measures; for each base measure it identifies all the corresponding dimensions, and for each dimension it identifies the attribute that is mapped to the measure group and the attributes directly related to it. The output of this task is dimension, cube, attribute, and base measure.
Note: Add a reference to 'Microsoft.AnalysisServices.dll' in your code. It can be found on the server where you have SSAS installed, i.e. C:\Program Files\Microsoft SQL Server\90\SDK\Assemblies
using Microsoft.AnalysisServices;   // AMO; requires the Microsoft.AnalysisServices.dll reference

// Entry point of the Source script component: walks the cube metadata and
// emits one row per (database, cube, dimension, attribute, measure).
public override void CreateNewOutputRows()
{
    String strCube, strDimension, strAttribute, strMeasure, strDatabase;
    Server s;
    CubeCollection cc;
    MeasureGroupCollection mgc;
    MeasureGroupDimensionCollection mgdc;
    MeasureGroupAttributeCollection mgac;
    RegularMeasureGroupDimension rmgd;
    ReferenceMeasureGroupDimension rfmgd;
    MeasureCollection mc;

    strDatabase = "Adventure Works DW 2008"; // add catalog here
    s = new Server();
    s.Connect("ANILMAHARJAN");               // add server here
    cc = s.Databases[strDatabase].Cubes;

    foreach (Cube c in cc)
    {
        strCube = c.Name;
        mgc = c.MeasureGroups;
        foreach (MeasureGroup mg in mgc)
        {
            mgdc = mg.Dimensions;
            mc = mg.Measures;
            foreach (MeasureGroupDimension mgd in mgdc)
            {
                strDimension = mgd.CubeDimension.Name;
                // Only regular and reference dimensions carry the attribute mapping we need.
                if (mgd.GetType().Name == "RegularMeasureGroupDimension")
                {
                    rmgd = (RegularMeasureGroupDimension)mgd;
                    mgac = rmgd.Attributes;
                }
                else if (mgd.GetType().Name == "ReferenceMeasureGroupDimension")
                {
                    rfmgd = (ReferenceMeasureGroupDimension)mgd;
                    mgac = rfmgd.Attributes;
                }
                else
                    break;

                foreach (MeasureGroupAttribute mga in mgac)
                {
                    // The granularity attribute is the one mapped to the measure group.
                    if (mga.Type.ToString() == "Granularity")
                    {
                        strAttribute = mga.Attribute.Name;
                        foreach (Measure m in mc)
                        {
                            strMeasure = m.Name;
                            int iFlag = 1;
                            GetRelatedAttribute(iFlag, strDatabase, strCube, strMeasure, strDimension, mga.Attribute);
                        }
                        break;
                    }
                }
            }
        }
    }
}

// Recursively emits the given attribute and every attribute related to it.
public void GetRelatedAttribute(int iFlag, String strDatabase, String strCube, String strMeasure, String strDimension, DimensionAttribute att)
{
    String strAttribute;
    AttributeRelationshipCollection arc;
    arc = att.AttributeRelationships;
    strAttribute = att.Name;

    // Emit one output row for this combination.
    CubeMetadataBuffer.AddRow();
    CubeMetadataBuffer.Database = strDatabase;
    CubeMetadataBuffer.Cube = strCube;
    CubeMetadataBuffer.Dimension = strDimension;
    CubeMetadataBuffer.Attribute = strAttribute;
    CubeMetadataBuffer.Measure = strMeasure;
    iFlag = 0;

    if (arc == null)
    {
        return;
    }

    foreach (AttributeRelationship ar in arc)
    {
        GetRelatedAttribute(iFlag, strDatabase, strCube, strMeasure, strDimension, ar.Attribute);
    }
}
4. Add a second script component, with the Transformation setting, with the following code:
String strCube, strMeasure, strDimension, strAttribute, strMDXQuery;
strCube = "[" + Row.Cube + "]";
strDimension = "[" + Row.Dimension + "]";
strAttribute = "[" + Row.Attribute + "]";
strMeasure = "[Measures]." + "[" + Row.Measure + "]";
// Build one MDX query per metadata row; "\r\n" terminates each query.
strMDXQuery = "select nonempty(" + strDimension + "." + strAttribute + "." + strAttribute + ".members," + strMeasure + ") on rows, {" + strMeasure + "} on columns from " + strCube + ";" + "\r\n";
Row.MDXQuery = strMDXQuery;

5. Store those outputs in a table or a file for later use. We can also add user-defined calculated sets and members to the MDX queries, which can be very helpful for other purposes, e.g. to capture performance statistics.
Snapshot of the SSIS package:

Some sample output MDX queries:
select nonempty([Promotion].[Promotion].[Promotion].members,[Measures].[Internet Sales Amount]) on rows, {[Measures].[Internet Sales Amount]} on columns from [Adventure Works];
select nonempty([Promotion].[Min Quantity].[Min Quantity].members,[Measures].[Internet Sales Amount]) on rows, {[Measures].[Internet Sales Amount]} on columns from [Adventure Works];
select nonempty([Promotion].[End Date].[End Date].members,[Measures].[Internet Sales Amount]) on rows, {[Measures].[Internet Sales Amount]} on columns from [Adventure Works];
select nonempty([Promotion].[Start Date].[Start Date].members,[Measures].[Internet Sales Amount]) on rows, {[Measures].[Internet Sales Amount]} on columns from [Adventure Works];
Hope this post will be useful for all of us in the field of BI.
Regards,
Anil Maharjan