Monday, June 22, 2015

Programming: Remove Empty Rows in a DataTable when importing from Excel

Background

A simple bug a tester found in a bulk-importing application that was using OLEDB to scrape an Excel spreadsheet (the notorious Sheet1$). Googling showed some answers on Stack Overflow, however they all ended up copying the rows into a separate list, which didn't fit our problem: we need to set the PrimaryKey field of the DataTable itself later in our code.

Problem Space 

How does one delete the empty rows in the DataTable Rows collection itself?

Solution Space

Use Remove() in conjunction with a LINQ query that first materializes all the empties as object references, so we are not removing rows from the collection while enumerating it.


   
    //Requires: using System.Data; and using System.Linq;
    //Any row that has an empty string in the crucial column "Primary Key" needs to be removed.
    //Materialize with ToList() first so we are not removing rows while enumerating the collection.
    var empties = table
        .Rows
        .Cast<DataRow>()
        .Where(
            row => string.IsNullOrEmpty(row[primaryKey].ToString())
        ).ToList();

    foreach (var empty in empties)
    {
        //Empties begone!
        table.Rows.Remove(empty);
    }

    //With the empties gone, the column can now safely serve as the primary key.
    table.PrimaryKey = new[] { table.Columns[primaryKey] };

    var rows = table.Rows.Cast<DataRow>().ToList();
    foreach (var row in rows)
    {
        //Your code goes here!
    }

Sunday, June 14, 2015

Upgrade: VHD files to VHDX file format

Background

In a previous article I had migrated a SQL Server that made use of iSCSI targets on Windows Server 2008 R2. The destination server was a brand new Windows Server 2012 R2 box and along with it came the new format of VHDX.

According to the MSDN Overview of VHDX, the VHDX format provides the following features:
  1. Support for virtual hard disk storage capacity of up to 64 TB.
  2. Protection against data corruption during power failures by logging updates to the VHDX metadata structures.
  3. Improved alignment of the virtual hard disk format to work well on large sector disks.
  4. Larger block sizes for dynamic and differencing disks, which allows these disks to attune to the needs of the workload.
  5. A 4-KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4-KB sectors.
  6. The ability to store custom metadata about the file that the user might want to record, such as operating system version or patches applied.
  7. Efficiency in representing data (also known as “trim”), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks, and trim-compatible hardware.)

Let's look at the ROI

  1. Neutral - Datacenter guys rejoice! For me in the home lab, I am actually not seeing too much here. Even if I maxed out a standard Dell MD3000i SAN I would at best net 45TB of the 64TB as a single file.
  2. Positive - This is a bigger win for the smaller labs like mine, because not every lab is blessed with a UPS, especially one that will meet an SLA [A]. Even with a RAID controller battery, power outages can still harm buffer caches before data reaches the write-back cache in Server 2012!
  3. Positive - Any backup software writing to mechanical disks will benefit from this.
  4. Neutral/Positive - Data centers and VDI deployments, I am sure, will love this. At first I put Neutral, however it dawned on me that my developer Hyper-V VMs can take advantage of this feature since they 1) work on 4K-aligned NTFS disks and 2) can make use of differencing disks. If I get enough requests I will do a posting on differencing disks.
  5. Positive - Pretty much every Hyper-V guest in a Windows environment will benefit from this. Even in my case, the majority of the VHDs I have deployed are VM OS disks as well as drive shares.
  6. Neutral - This is for the bigger guys again. SCVMM is the first thing that comes to mind as gaining the primary advantage from this feature.
  7. Winner - Biggest ROI right here. The "trim" MSDN is referring to is at the hardware level. SSDs, SCSI and iSCSI devices, to name a few, can now accept the TRIM command directly (pass-through), allowing the hardware to more efficiently keep its resources in order. Double-check your hardware for TRIM support.


Problem Space

The VHD format itself is deprecated in favor of VHDX. In this instance the VHD files are being used for iSCSI targets that another Hyper-V server leverages. The iSCSI wizard provides no option for using a VHD file, so we have no choice but to upgrade.

Solution Space

You have two (2) viable solutions. Before beginning either of them, make sure to back up the VHD file to another secure location!

First is to use PowerShell's Convert-VHD cmdlet.
PS C:\> Convert-VHD -Path c:\test\testvhd.vhd -DestinationPath c:\test\testvhdx.vhdx

Second is to use the Hyper-V Disk Wizard. Just follow the green boxes.






Note: I leave the disk sizing strategy to your use case.








Now we wait. When it concludes you can simply delete your old disk, although I recommend keeping the file for seven (7) to fourteen (14) days as a "just-in-case".

Conclusion

Post Mortem

I could have saved time had I shrunk the VHD itself prior to conversion.

Fun Facts

[A] Most standard SLAs guarantee roughly 99.7% uptime: of the 8,760 hours in a 365-day year, only 24 hours of downtime are allowed. If your UPS is beefy enough to cover those 24 hours of downtime then you can consider your SLA an aggressive policy. You do see these in data centers that feature a diesel backup generator powering industrial battery cells in series-parallel configurations; these in turn feed the main electrical panels that the UPS units plug into.

Saturday, June 13, 2015

Migration: VMware 5.5.0 to Hyper-V 2012 R2

Background

This has been a long overdue posting. I started with ESXi 4.1 back in the Fall of 2010 and I was really satisfied with the features offered. For years I had advocated sticking with ESXi over Hyper-V. In Spring 2013, due to political limitations with one of my valued clients, I was forced to use Hyper-V, on Server 2008 R2, for any virtualization. The client, the largest healthcare insurance company in the US, was informed of all their options with VMware; they even laughed at the cost since they had the resources to do whatever they wanted. Hyper-V it was! I was shocked at how much the product had improved; Hyper-V had come a long way in its technical offerings.

The biggest differentiator now, which VMware has failed to address, is licensing. At the end of the day I am a professional, a businessman. I have no qualms using the right tool for the job and paying for it when it is warranted; Microsoft won that point fair and square. Change is hard, though, so while I succeeded with using Hyper-V at work, I kept VMware in my home lab.

Sadly things did not go so well as time went on; it was time for a change. The upgrade to 5.5 was not as painless as other upgrades had been, however what truly irked me as a consumer was this:
No new features? After all... that... upgrading?!
Not cool. I went through this whole hassle just for a bunch of features to not be available... Luckily I was still able to get the SSD host caching set up, but that was a far cry from feeling victorious. I tried to put myself in their shoes, however it didn't make any sense; perhaps one day I will post that debate. Still I found myself holding on, resisting the urge of "change".

The purchase of a Dell PowerEdge R905 would be the final nail in the coffin for VMware in my home lab. I read through the licensing agreement and found this gem:
As of vSphere 5.5, these provisions apply to all paid editions of vSphere, vSphere with Operations Management, and vCloud Suite:
  • vSphere 5.5 is licensed on a per-processor basis (per-CPU).
  • Each physical processor (CPU) in a server must have at least one vSphere 5 processor license key assigned to be able to run vSphere.
  • There are no restrictions on physical cores or physical RAM.
  • There are no restrictions on the number of virtual machines that can run on each properly licensed vSphere 5.5 processor.
  • You can configure up to the maximum number of vCPUs per virtual machine as specified in the vSphere 5.5 Configuration Maximums Guide.
Licensing for VMware vSphere 5.5
The cost to cover the new server's licensing was a staggering $2,875.00 USD... a pretty big jump from free! Hyper-V, on the other hand, has no restriction on the number of sockets; the license covers all of them. In my lab I run all Windows boxes as it is, so this was a no-brainer. Unlike my clients I like to pinch pennies where I can, and coupling that with Windows Server 2012 R2 sweetened this deal: with the Datacenter edition you can have unlimited VMs as long as you have a guest OS license; cue the Gregorian Hallelujah and enter the MSDN.


Problem Space

Inventory Concerns

We need to look at the current physical host and identify the following:
    1. How many guest machines are involved? In our situation there is only one (1) left, as the Active Directory was already moved prior to this post.
    2. How many datastores does the server currently host? In our situation it has 2 separate RAID1 arrays, each housing 1 VMFS datastore. These RAID arrays will not be moved; instead they will be re-purposed as SAN storage after they are formatted from VMFS to NTFS as iSCSI targets.
      1. The first RAID1 is 250GB capacity and holds the VMware guest OSes. This will serve as the main storage for the host OS, i.e. Windows. All that is required is taking a backup of the VMware configuration and disk. After that the filestore can be emptied and formatted to NTFS.
      2. The second RAID1 is 1.5TB capacity and holds general file share data as well as a few VHD containers that are used by Hyper-V as a SAN. To migrate this out of the VMware filesystem the data will be staged to a 4TB drive as a copy. The contents of the logical arrays can then be formatted and moved back after the OS is installed.
    3. Is a staging/transfer platform available to move the data off the server? In the event something goes south I want the option of having all my data somewhere away from the disaster zone. I prepared a 4TB drive to house the migration data. Carbonite already has my data off-site that I can get back through them just in case everything goes south.

Guest Machine Settings Considerations

Let's not overlook any specialized configuration for the migration at hand. I strongly recommend circling back to your administrators on any checklists they may have to consider during migrations. If you are the administrator, I strongly recommend you create documentation such as a checklist of common steps to take when migrating a server (a quick T-SQL sanity check for several of these items follows the list).
    1. How many instances of SQL Server are installed? In our case we have 1 instance.
    2. Are any instances deployed with TDE (Transparent Data Encryption)? This can be a bit of a trick question, as TDE does not cover FILESTREAM data, and FileTable is built on top of FILESTREAM. :) In other words TDE only encrypts the data and log files; the Filestream/FileTable data, which is appended to the backups after the fact, is NOT encrypted. This is true in our situation, though you can still enable TDE if you like. TDE was deployed prior to FileTable being deployed, so the keys MUST be secured beforehand; otherwise, by the time the backup is restored, you face being locked out of that data. Finally, not all databases in an instance need to have TDE deployed/enabled.
    3. Does the guest machine have any certificates that need to be exported? An important question to ask if you want the migration to work. :) In this case we do have certificates from the TDE deployment on a single database.
    4. Does the guest machine host any iSCSI initiators or iSCSI targets? For really scaled-out SQL Server instances or clustered SQL Servers this is definitely a required consideration.
    5. Are there any static IP addresses? In this case no; that is something I had moved away from by taking advantage of reservations in DHCP.
    6. Do you have a full backup as well as the differential backups? This better be a resounding Y E S! :D
    7. Was a backup made of the FileTable files? This is part of your backup plan! Let's take advantage of that non-transactional access ;)
    8. Are any 3rd-party tools involved with backing up and/or maintenance? Yes, I use Ola Hallengren's scripts for all of my clients, including myself.
    9. Will the migration require any conversions? In this case yes: the VHD files will be upgraded to the VHDX format.
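Several of these checklist items can be answered straight from the catalog views. Below is a minimal T-SQL sanity-check sketch; adjust the queries to your environment:

USE [master]
GO

-- Checklist item 2: which databases have TDE, and in what state?
SELECT DB_NAME(database_id) AS DatabaseName, encryption_state
FROM sys.dm_database_encryption_keys
GO

-- Checklist item 3: which certificates exist on this instance?
SELECT name, pvt_key_encryption_type_desc, expiry_date
FROM sys.certificates
GO

-- Checklist item 6: when did the last full (D) and differential (I) backups finish?
SELECT database_name, type, MAX(backup_finish_date) AS LastBackupFinished
FROM msdb.dbo.backupset
GROUP BY database_name, type
GO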

Solution Space

Inventory

Let's address concern number (1) first by taking an overview of the hardware and systems that will be involved.

New Destination Server - Dell PowerEdge R905:
  1. Case: 4U Rack mountable
  2. CPU: 4x AMD Opteron 8389 @ 2.89GHz Quad-Core
  3. RAM: 32GB DDR3 ECC 677MHz
  4. HDD: 4x 3.5" 15K Seagate 146GB drives in a single RAID-10. Raid-10 Vs Raid-01
  5. Controller: Dell Perc 6/i w/ Battery
  6. Host OS: Windows Server 2012 R2 Data Center
  7. NIC: 1x Broadcom NetXtreme II BCM5708C. 4x ports total (2x onboard and 2x through a daughter riser card). 10GbE capable. If you had issues with network performance with this card in Hyper-V please see my other article: Troubleshooting Broadcom BCM5708C in Hyper-V 2012 and above
The current rig hosting VMware:

Current Source Server - Dell PowerEdge T100:
  1. Case: Mid-Tower
  2. CPU: Intel Xeon E3110 @ 3.00GHz Dual-Core (no Hyper-Threading)
  3. RAM: 8GB DDR3 ECC 800MHz
  4. HDD: 2x 2.5" 7.2K Seagate 250GB drives in a single RAID1. 2x 3.5" 7.2K Seagate 1.5TB drives in a single RAID1.
  5. Controller: 3ware 9650SE
  6. Host OS: Windows Server 2012 R2 Standard
  7. NIC: 1x Intel 1x Gigabit on PCIe and 1x onboard Gigabit port. 2 ports total.
Staging Server - Generic Workstation with high capacity drive.
  1. Case: Mid-Tower
  2. CPU: AMD Phenom X4 9750 @ 2.40GHz Quad-Core
  3. RAM: 8GB DDR2 1800MHz
  4. HDD: 1x 2.5" OCZ Agility 3 SSD. 1x 3.5" 7.2K Seagate 4TB drive.
  5. Controller: NA
  6. Host OS: Windows 7 Ultimate x64
  7. NIC: 1x onboard Gigabit port. 

Second, we now have a high-level overview of the staging area for the migration as well as the datastores involved, which covers concerns 2 and 3.

Migration Preparations

iSCSI Targets

In order to get a better understanding, let's zoom in on the datastore issue. Right now the VMFS file system, colored orange in the diagram below, sits between the RAID layer and the NTFS layer. In order to have a successful migration we need to move all of the yellow and green storage items. For my migration, copying at the NTFS layer won't help, as the metadata stored there is faked by the VMFS system; restoring that information on top of the RAID layer would simply cause inconsistencies.

Looking at the diagram further, we have VHDs that serve as the virtual disks belonging to the iSCSI targets hosted by one of the guest SQL Servers. Let's take those offline so we can migrate them to the staging area.

Sign into the server that holds the references to the iSCSI targets (the initiator side).

Navigate to File and Storage Services -> Volumes -> Disks. Right click each disk and select "Take Offline". When you do that you should get a prompt; click Yes:


Next open the iSCSI initiator properties. For each target click the Disconnect button until all the targets are Inactive.


Now that we have taken care of the iSCSI consumer/client/initiator :) let's change over to the host server. The process is a bit simpler. Dismount each virtual disk. Next, delete the targets, which should be in the Idle status by now. If a target is not Idle, check whether it is locally mounted.


Next disable each virtual disk.


Next, open SQL Server Management Studio. We have to extract the certificate in order to be able to restore the backups we took at the beginning.

USE [master]
GO

-- Locate Databases that have TDE
/*
Indicates whether the database is encrypted or not encrypted.
0 = No database encryption key present, no encryption
1 = Unencrypted
2 = Encryption in progress
3 = Encrypted
4 = Key change in progress
5 = Decryption in progress
6 = Protection change in progress (The certificate or asymmetric key that is encrypting the database encryption key is being changed.)
*/
SELECT DB_NAME(database_id), *
FROM sys.dm_database_encryption_keys
GO

-- View Certificates 
SELECT * FROM sys.certificates
GO

-- Create backups
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'YourPassWordStringThatShouldNotEvenReallyBeWordsMoreLikeARandomSequenceOfVariousCharacters'
GO

BACKUP MASTER KEY TO FILE = 'C:\Master.Key.bak' ENCRYPTION BY PASSWORD = 'YourPassWordStringThatShouldNotEvenReallyBeWordsMoreLikeARandomSequenceOfVariousCharacters'
GO

BACKUP CERTIFICATE Z10ConsultingTDECertificate TO FILE = 'C:\Z10ConsultingTDECertificate.cer.bak' WITH PRIVATE KEY
(
 ENCRYPTION BY PASSWORD = 'ThisisNearlyAThrowAwayKeyThatIsonlyNeededToBeUsedIfYoureRestoringFromABackupSoYeaKeepItSafe', 
 FILE = 'C:\Z10ConsultingTDECertificate.key.bak'
)
GO

ALTER DATABASE Z010DB SET ENCRYPTION OFF;
GO

WHILE ((SELECT encryption_state FROM sys.dm_database_encryption_keys WHERE DB_NAME(database_id) = 'Z010DB') <> 1)
BEGIN
 PRINT('Waiting for the encryption state to change')
 WAITFOR DELAY '00:00:05' -- Poll politely instead of spinning
END
GO

USE Z010DB
GO

DROP DATABASE ENCRYPTION KEY;
GO

USE [master]
GO

-- Drop Old Keys & Certificates
DROP CERTIFICATE Z10ConsultingTDECertificate
GO

DROP MASTER KEY 
GO

-- Test Backup Restores (note the file names must match the .bak files created above)
RESTORE MASTER KEY FROM FILE = 'C:\Master.Key.bak'
DECRYPTION BY PASSWORD = 'YourPassWordStringThatShouldNotEvenReallyBeWordsMoreLikeARandomSequenceOfVariousCharacters'
ENCRYPTION BY PASSWORD = 'YourPassWordStringThatShouldNotEvenReallyBeWordsMoreLikeARandomSequenceOfVariousCharacters'

-- Only needed if no master key was restored above; a database holds a single master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourPassWordStringThatShouldNotEvenReallyBeWordsMoreLikeARandomSequenceOfVariousCharacters'
GO

OPEN MASTER KEY DECRYPTION BY PASSWORD = 'YourPassWordStringThatShouldNotEvenReallyBeWordsMoreLikeARandomSequenceOfVariousCharacters'
GO

CREATE CERTIFICATE Z10ConsultingTDECertificate FROM FILE = 'C:\Z10ConsultingTDECertificate.cer.bak' WITH PRIVATE KEY
(
 FILE = 'C:\Z10ConsultingTDECertificate.key.bak',
 DECRYPTION BY PASSWORD = 'ThisisNearlyAThrowAwayKeyThatIsonlyNeededToBeUsedIfYoureRestoringFromABackupSoYeaKeepItSafe'
)
GO

-- Put encryption back on 
USE Z010DB
GO

CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE Z10ConsultingTDECertificate
GO

ALTER DATABASE Z010DB SET ENCRYPTION ON;
GO

-- Encryption runs in the background; wait for encryption_state 3 (Encrypted)
WHILE ((SELECT encryption_state FROM sys.dm_database_encryption_keys WHERE DB_NAME(database_id) = 'Z010DB') <> 3)
BEGIN
 WAITFOR DELAY '00:00:05'
END
PRINT('Encryption has been successfully deployed')

-- Create backups
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'YourPassWordStringThatShouldNotEvenReallyBeWordsMoreLikeARandomSequenceOfVariousCharacters'
GO

BACKUP MASTER KEY TO FILE = 'C:\Master.Key' ENCRYPTION BY PASSWORD = 'YourPassWordStringThatShouldNotEvenReallyBeWordsMoreLikeARandomSequenceOfVariousCharacters'
GO

BACKUP CERTIFICATE Z10ConsultingTDECertificate TO FILE = 'C:\Z10ConsultingTDECertificate.cer' WITH PRIVATE KEY
(
 ENCRYPTION BY PASSWORD = 'ThisisNearlyAThrowAwayKeyThatIsonlyNeededToBeUsedIfYoureRestoringFromABackupSoYeaKeepItSafe', 
 FILE = 'C:\Z10ConsultingTDECertificate.key'
)
GO

BACKUP DATABASE Z010DB TO DISK = 'C:\Z010DB_ExportableBackup_20150607.Bak'
WITH
 COMPRESSION, 
 NOFORMAT, 
 STATS = 10
GO

With our certificates and keys safely in hand, we can stop SQL Server and shut down the guest OS.


By this point all the data on the VMware host has been moved off, including the backup taken of the VMware configuration and disk. In our overview diagram, everything that was on drives A and B has been relocated to C. Shut down the VMware server and get ready to format the system. We have all of our keys and certificates too. Time to nuke VMware!


Migration

We need to ensure the destination server meets the Prerequisites for FileTable. I decided to do this during the backup process on the current server. The most critical part is the firewall control; luckily MSDN has good coverage on that: open ports 139 and 445. The SQL-side prerequisites can be scripted as shown below.
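Beyond the firewall ports, FILESTREAM must be enabled at the instance level and the database needs non-transactional access before FileTable will function. Here is a minimal sketch of the T-SQL side; the directory name Z010Files is a placeholder, and the Windows share portion is configured through SQL Server Configuration Manager:

-- Instance level: 2 = full access (T-SQL and Win32 streaming)
EXEC sp_configure 'filestream access level', 2
RECONFIGURE
GO

-- Database level: enable non-transactional access and name the FileTable directory
ALTER DATABASE Z010DB
SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'Z010Files')
GO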







If you get an access denied error, it is more than likely that you forgot to add the Windows login of that user to the SQL logins on the new server. Additionally, since FILESTREAM doesn't use Windows share permissions, you will need to grant roles such as db_datareader to the login under User Mappings. If you have a large number of users, please default to using a group instead of individual users.
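A sketch of wiring that up on the new server; the domain group CONTOSO\SqlUsers is a hypothetical stand-in for your own group:

USE [master]
GO

-- Recreate the Windows login on the new server (prefer a group over individual users)
CREATE LOGIN [CONTOSO\SqlUsers] FROM WINDOWS
GO

USE Z010DB
GO

-- Map the login to a database user and grant read access
CREATE USER [CONTOSO\SqlUsers] FOR LOGIN [CONTOSO\SqlUsers]
GO

ALTER ROLE db_datareader ADD MEMBER [CONTOSO\SqlUsers]
GO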

Let's go ahead and import the keys and certificates; the script from before includes that. After that is done we simply restore the backup, and that is the moment of truth!
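The restore itself is straightforward against the backup taken earlier; add WITH MOVE clauses only if the drive layout on the new server differs from the old one:

USE [master]
GO

RESTORE DATABASE Z010DB
FROM DISK = 'C:\Z010DB_ExportableBackup_20150607.Bak'
WITH RECOVERY, STATS = 10
GO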




YES!!! :D

OK, so at this point we have a freshly formatted former VMware server and a successfully migrated SQL Server sitting in its new host under Hyper-V on Windows Server 2012 R2 Datacenter. I spun up a new guest OS of Windows Server 2012 R2 Standard, as this box will be the iSCSI SAN box. Please don't call it SANdBox or the community may hit you. ;)

After enabling NIC Teaming and joining the new server to the domain, it is time to transfer the files and VHDs back to the 1.5TB RAID array from which they came.




I have got to say I was pretty dang happy getting 104 MB/s! No doubt I am getting my money's worth out of the teamed NICs.

Conclusion

Post Mortem

One thing I truly wish I could have done differently was to have a more resilient staging area for the data migration. The 4TB drive was a single point of failure in this process. Even with Carbonite I would have been out for days waiting to get my data back, plus having to redo the whole plan; a calculated risk that could have been mitigated further, if you will.

I hope you found this article helpful in moving to Microsoft's Hyper-V platform. I wish you the best of luck in your own endeavors!


Thursday, July 18, 2013

Useful insight on NTLM Windows Authentication in Firefox

This has been one of those issues that often crops up during development. While I agree that most enterprises have IE, which can be configured to handle Integrated Windows Authentication, I still insist on testing a website in as many browsers as I can. Using Default Browser Switcher in Visual Studio 2012 (you can find it in the Extension Manager, in the Online section) makes this chore easier to boot.


Sources:
Adding a trusted webserver to the NTLM whitelist (in Firefox this lives in the network.automatic-ntlm-auth.trusted-uris preference under about:config).

Tuesday, February 12, 2013

Dropping FullText in SQL Server

Many months ago I watched a TechEd demo by Bob Beauchemin, an active blogger over at SQLskills, on how to set up FileTable in SQL Server 2012. Links to Part 1 - Filestreams and filetables in SQL Server 2012 and to Part 2 - Integrating SQL Server Filetables, Property Search, and FTS/Semantic Search.

Before going any further I want to give 100% credit to Bob's wonderfully keen insight on FileTable, FILESTREAM and SQL Server.

The issue at hand:
Oddly enough I got a warning when altering my FileTable in SQL Server 2012 today, where normally it is a simple process:
Unable to modify table. Table contains at least one fulltext index.
OK, so when all else fails, use T-SQL — it is a simple matter of two commands:

DROP FULLTEXT INDEX ON MYDB..MyFileTable
GO
DROP FULLTEXT CATALOG MyFileTableFullTextCatalog
GO

I ran the commands and got the same error. This happened because the fulltext index shared an index pre-made by SQL Server when I built the FileTable, which Bob explains in more detail in his post.


stream_id unique index.
No big deal... I saw that one coming, at least. Apparently when SSMS is making changes to a table it creates a temp table and places a session lock on the original. Fair enough. I restarted the instance, ran the commands above, and the problem was solved.
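If you are ever unsure which tables still carry a fulltext index, or which catalog and key index it is bound to, the catalog views will tell you before you drop anything:

-- Enumerate fulltext indexes with their key index and catalog
SELECT OBJECT_NAME(fti.object_id) AS TableName,
       i.name AS KeyIndexName,
       c.name AS CatalogName
FROM sys.fulltext_indexes AS fti
JOIN sys.indexes AS i
  ON i.object_id = fti.object_id AND i.index_id = fti.unique_index_id
JOIN sys.fulltext_catalogs AS c
  ON c.fulltext_catalog_id = fti.fulltext_catalog_id
GO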

The Reason:
This is to get around a defect with Entity Framework not having compatibility with HierarchyId as a SQL data type. The EF team stated in their blog that EF6 probably won't have support for this data type either. EF will support a read-only view though, say something like this:

USE [DB]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE VIEW [dbo].[MyFileTable]
AS
----
SELECT
MyFT.stream_id,
--Transform and omit path locator and path locator parent.
FileTableRootPath() + MyFT.file_stream.GetFileNamespacePath() AS FileNamespacePath, 
MyFT.name, 
-- Omit these two fields (replaced by the string path above)
--MyFT.path_locator,
--MyFT.parent_path_locator, 
-- Remaining normal fields.
MyFT.file_type, MyFT.cached_file_size, MyFT.creation_time, MyFT.last_write_time, 
MyFT.last_access_time, MyFT.is_directory, MyFT.is_offline, MyFT.is_hidden, MyFT.is_readonly, MyFT.is_archive, MyFT.is_system, MyFT.is_temporary
FROM MyFileTableNameInDbo AS MyFT
----
GO

For the moment this will work. I will expand on this after I give it more thought. Perhaps it would be better to use an AFTER UPDATE trigger to populate a real USER_TABLE with just the stream_id (PK) and the path_locator as a string via the FileNamespacePath in the script above. Finally I will include the name of the file. This way I can use foreign key constraints (which views do not support).

This new "real" table can only be updated by this trigger. The downside is that the EF will not support the additional features offered by FileTable (yet!)

Tuesday, February 5, 2013

Part 1: Migrating from EF 4.0 to EF 5.0

Warning! This is a multi-part blog article. I would advise not proceeding until all the parts have been published.

Since it is a patch Tuesday for me I figured it would be a great time to upgrade my EF instance from 4.0 to 5.0. Please note these are my own musings.

Prerequisites:
  • Install .NET 4.5 on Windows 7, available at Microsoft's website: here.
  • Patch your rig with the latest Windows updates.
  • Watch progress bars (optional).
  • Drink Red Bull (recommended).
  • Verify all files are checked into TFS.
  • Make a full backup of SQL Server (both the TFS DB and the website DB).

While I was waiting I decided to look for a new visual style for VS2010, since the monitors on my desk were always using a light background, leading to headaches and eye strain. I ended up settling on "The Dark Side of Visual Studio", which you can download yourself: here. Full credit to InfinitiesLoop for creating this awesome theme.

32 minutes later... More rebooting and another 16 minutes of patching.

Installing Visual Studio 2012 which requires almost no explanation (just lots of moving dots while it installs). Swell.

If you're using TFS, open Visual Studio 2010, copy your settings, and port them to 2012. Your local folder mapping should remain the same.

Start upgrading your project properties and select 4.5 as your framework target. In your web.config, find the compilation element's targetFramework="4.0" attribute and replace it with 4.5.
I also added httpRuntime targetFramework="4.5" right under system.web and before compilation.


Now for the Entity Framework. More than likely once you did this, a nice warning popped up:
Warning: Old school doesn't like the new school kids.


Let's add the new 5.x .tt templates to generate our new functionality, so once we are done we can get rid of the old one. You backed up your TFS before this, right?
Rename your current 4.x templates with an "Old" suffix in case things don't work out.
Update the $edmxInputFile$ variable with the name of your edmx model.
In my case I used Z10.edmx.

Next, perform the same steps on XXX.Context.tt. Below are two sample screenshots from my application.

Sweet! I have all of my new DbSet<T> objects. 
Due to my design I had implemented my custom logic via partial classes in a separate file (called ModelExtensions.cs). With this methodology the template engine can regenerate as much as it sees fit without killing my extensions, and thus I can remove the old template without feeling guilty (you did make a backup, right?). Scalpel.

This is the critical part. In my striving effort to make this upgrade worthwhile, I want to make sure that I implement an actual repository. Before, I hadn't bothered setting one up since it was a small application I had made for my personal use. Now it is time to get my refactor on.

Wednesday, January 30, 2013

It's 2013... time for a blog!

After assessing various blogging engines such as BlogEngine.NET, Subtext, and WordPress (IIS + MS SQL Server) and weighing the features I was looking for, I finally settled on a solution that I would not host myself. After realizing what I wanted my primary website, https://www.z10software.com, to achieve, separating the blog away from it became obvious. Now I can develop against the site and share my various musings, gleanings if you will, here.

A sneak peek of topics to be covered in the future (in ad-hoc order):

  • SQL Server FileStream and using EFS.
  • Integrating FileStream in EF5.0 utilizing WCF services.
  • Migrating from EF4.x to EF5.0.
  • Adding Cubes in SSAS using a DB other than Northwind and AdventureWorks (due to their nature they sometimes don't reflect the needs of other companies).
  • Implementing a simple FTPS solution for use with SSIS.
  • Building a Lab
    • Deploying a Domain Controller
    • DHCP, DNS and a hint of AD DS.
  • More! (Hey I told you it was a sneak peek, not a road map!)

Stay tuned.