The Case of Slow WPF Rendering in a WinForms Application


Originally posted on: http://geekswithblogs.net/akraus1/archive/2016/02/27/172993.aspx

I had an interesting case where a new WPF control was added to a legacy WinForms application. The WPF control worked perfectly in a test application, but for some strange reason it was very slow in the final WinForms application, where it was hosted with the usual System.Windows.Forms.Integration.ElementHost. The UI would hang and one core was always maxed out. The problem built up over some minutes, until even simple button presses caused 100% CPU on one core for 20s. When faced with high CPU consumption, the reflexive reaction of a developer is to attach a debugger and break into the methods to see where the issue is. If you use a real debugger like Windbg you can use the !runaway command to find the threads with the highest CPU usage:

0:006> !runaway
User Mode Time
  Thread       Time
   0:368       0 days 0:00:11.625
   4:13a0      0 days 0:00:06.218
   6:301c      0 days 0:00:00.000
   5:21c8      0 days 0:00:00.000
   3:3320      0 days 0:00:00.000
   2:20e4      0 days 0:00:00.000
   1:39a0      0 days 0:00:00.000

but when I tried to break in, the UI thread was always just waiting for window messages:

# ChildEBP RetAddr  Args to Child             
00 0018f0d0 5cb1e13a dcd9ca4a 72f7a424 0018f370 USER32!NtUserWaitMessage+0xc
01 0018f154 5cb1db39 00000000 ffffffff 00000000 System_Windows_Forms_ni+0x1ae13a
02 0018f1a8 5cb1d9b0 024d09e4 1bda0002 00000000 System_Windows_Forms_ni+0x1adb39
03 0018f1d4 5cb06129 024d09e4 0018f29c 00000000 System_Windows_Forms_ni+0x1ad9b0
04 0018f1ec 00bc048b 02444410 0018f204 72f71396 System_Windows_Forms_ni+0x196129

Eventually I would find some non-waiting stacks, but it was not clear whether these were the most expensive ones and why. The problem here is that most people are not aware that the actual drawing happens not in user mode but on the extended kernel side of the thread. Every time you wait in NtUserWaitMessage, the kernel side of the thread can continue its execution, but you cannot see what's happening as long as you are only looking at the user space side.

If debugging fails you can still use a profiler. It is about time to tell you a well-hidden secret of the newest Windows Performance Toolkit. If you record profiling data with WPR/UI and enable the profile Desktop composition activity, new views under Video become visible when you open the trace file with WPA. Most views seem to be aimed at kernel developers, but one view named Dwm Frame Details Rectangle By Type is different. It shows all rectangles drawn by DWM (the Desktop Window Manager). WPA shows not only the flat list of updated rectangles and their coordinates, but also draws them in the graph for the selected time region. You can use this view as a poor man's screenshot tool to visually correlate the displayed message boxes and other windows with the performed user actions. This way you can visually navigate through your ETL file and see which windows were drawn at specific points in your trace!

[Screenshot: the Dwm Frame Details Rectangle By Type view in WPA drawing the updated window rectangles]

That is a powerful capability of WPA of which I was totally unaware until I needed to analyze this WPF performance problem. If you are more of an xperf fan, you need to add the following to your user mode providers list:

    • Microsoft-Windows-Dwm-Core:0x1ffff:0x6
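A complete xperf recording session could then look roughly like this (a sketch: the session name is my own choice, and the exact flags are worth double-checking against your WPT version):

    • xperf -start DwmSession -on Microsoft-Windows-Dwm-Core:0x1ffff:0x6 -f c:\temp\dwm.etl
    • <execute your use case>
    • xperf -stop DwmSession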

With that provider enabled you are ready to record pretty much any screen rectangle update. This works only on Windows 8 machines or later. Windows 7 knows the DWM-Core provider, but it does not emit the events necessary to draw the DWM rectangles in WPA. The rectangle drawing feature of WPA was added with the Win10 SDK release of December 2015. OK, so now we can see more. Back to our perf problem. I could see that for a seemingly simple screen update only two threads were consuming large amounts of CPU: the UI thread and the WPF render thread. A little clicking around in the UI would cause excessive CPU usage. Most CPU was used by the WPF rendering thread:

ntdll.dll!_RtlUserThreadStart
ntdll.dll!__RtlUserThreadStart
kernel32.dll!BaseThreadInitThunk
wpfgfx_v0400.dll!CPartitionThread::ThreadMain
wpfgfx_v0400.dll!CPartitionThread::Run
wpfgfx_v0400.dll!CPartitionThread::PresentPartition
wpfgfx_v0400.dll!CComposition::Present
wpfgfx_v0400.dll!CSlaveHWndRenderTarget::Present
wpfgfx_v0400.dll!CDesktopHWNDRenderTarget::Present
wpfgfx_v0400.dll!CDesktopRenderTarget::Present
wpfgfx_v0400.dll!CSwRenderTargetHWND::Present
wpfgfx_v0400.dll!CSwPresenter32bppGDI::Present
wpfgfx_v0400.dll!CMILDeviceContext::BeginRendering
user32.dll!NtUserGetDC
ntdll.dll!LdrInitializeThunk
ntdll.dll!_LdrpInitialize
wow64.dll!Wow64LdrpInitialize
wow64.dll!RunCpuSimulation
wow64cpu.dll!Thunk0Arg
wow64cpu.dll!CpupSyscallStub
ntoskrnl.exe!KiSystemServiceCopyEnd
win32kbase.sys!NtUserGetDC
ntoskrnl.exe!ExEnterPriorityRegionAndAcquireResourceShared
win32kbase.sys!_GetDCEx
wpfgfx_v0400.dll!CMILDeviceContext::EndRendering
user32.dll!ReleaseDC
user32.dll!NtUserCallOneParam
ntdll.dll!LdrInitializeThunk
ntdll.dll!_LdrpInitialize
wow64.dll!Wow64LdrpInitialize
wow64.dll!RunCpuSimulation
wow64cpu.dll!ServiceNoTurbo
wow64.dll!Wow64SystemServiceEx
wow64win.dll!whNtUserCallOneParam
wow64win.dll!ZwUserCallOneParam
ntoskrnl.exe!KiSystemServiceCopyEnd
win32kfull.sys!NtUserCallOneParam
ntoskrnl.exe!ExReleaseResourceAndLeavePriorityRegion
ntoskrnl.exe!KiCheckForKernelApcDelivery
ntoskrnl.exe!KiDeliverApc
win32kfull.sys!NormalAPCInvalidateCOMPOSITEDWnd
win32kbase.sys!EnterCrit …

If that does not make much sense to you, you are in good company. The WPF rendering thread is rendering a composited window (see CComposition::Present), which seems to use a feature of Windows that also knows about composited windows. After looking with Spy++ at the actual window creation parameters of the hosting WinForms application,

[Screenshot: Spy++ showing the extended window styles of the hosting WinForms window]

it turned out that the Windows Forms window had the WS_EX_COMPOSITED flag set. I write this here as if it were flat obvious. It certainly is not. Solving such problems always involves asking more people for their opinion on what the issue could be. The final hint that the WinForms application had this extended style set was discovered by a colleague of mine. Nobody can know everything, but as a team you can tackle pretty much any issue.

A little googling reveals that many people before me have also had problems with composited windows. This flag basically inverts the z-rendering order. The visual effect is that the bottom window is rendered first. That allows you to create translucent windows where the windows below your window shine through as background. WPF uses such things for certain visual effects.

That is enough information to create a minimal reproducer of the issue. All I needed was a default Windows Forms application which hosts a WPF user control.
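The cWPFHost control referenced in the listing below is a System.Windows.Forms.Integration.ElementHost placed on the form. If you skip the designer, wiring it up by hand looks roughly like this (a sketch; the field name matches the listing below):

    var cWPFHost = new System.Windows.Forms.Integration.ElementHost
    {
        Dock = DockStyle.Fill
    };
    Controls.Add(cWPFHost);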

public partial class Form1 : Form
{
    protected override CreateParams CreateParams
    {
        get
        {
            CreateParams cp = base.CreateParams;
            cp.ExStyle |= 0x02000000;  // Turn on WS_EX_COMPOSITED
            return cp;
        }
    }

    public Form1()
    {
        InitializeComponent();
        cWPFHost.Child = new UserControl2();
    }
}

The WPF user control is also very simple

public partial class UserControl2 : UserControl
{
    public UserControl2()
    {
        InitializeComponent();
        this.Loaded += UserControl2_Loaded;
    }

    void UserControl2_Loaded(object sender, RoutedEventArgs e)
    {
        HwndSource hwnd = System.Windows.PresentationSource.FromVisual(this) as HwndSource;
        HwndTarget target = hwnd.CompositionTarget;
        target.RenderMode = RenderMode.SoftwareOnly;
    }
}

To get the high CPU issue three things need to come together

  1. Hosting window must have set the WS_EX_COMPOSITED window style.
  2. WPF child window must use composition.
  3. WPF child window must use software rendering.

When these three conditions are met, you have a massive WPF redraw problem. It seems that two composited windows cause some rendering loops inside the OS, deep in the kernel threads where the actual rendering takes place. If you let WPF use HW acceleration it seems to be OK, but I have not measured how much GPU power is then wasted. Below is a screenshot of the sample WinForms application:

 

[Screenshot: the sample WinForms application hosting the WPF user control]

Once the culprit was found, the solution was to remove the WS_EX_COMPOSITED window style from the WinForms hosting window, which did not need it anyway.
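In terms of the repro code above, the fix is simply not to set the flag, or to clear it explicitly (a minimal sketch):

    protected override CreateParams CreateParams
    {
        get
        {
            CreateParams cp = base.CreateParams;
            cp.ExStyle &= ~0x02000000;  // Clear WS_EX_COMPOSITED
            return cp;
        }
    }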

Media Experience Analyzer

The problem was solved, but it is interesting to see the thread interactions happening while the high CPU issue occurs. For that you can use a new tool from MS named Media Experience Analyzer (XA) which was released in February 2016. If you thought that WPA is complex, then you have not yet seen how else you can visualize the rich ETW data. This tool is very good at visualizing thread interactions in a per-core view like the one below. When you hover over the threads, the corresponding context switch and ready thread stacks are updated on the fly. If you zoom out, it looks like a star field in Star Trek, just with more colors.

[Screenshot: XA per-core view of thread interactions]

If you want to get the most out of XA you can watch the videos at Channel 9, which give you a pretty good understanding of how Media Experience Analyzer (XA) can be used.

When should you use WPA and when Media Experience Analyzer?

So far the main goal of XA seems to be to find hangs and glitches in audio and video playback. That requires a thorough understanding of how the whole rendering pipeline in Windows works, which is a huge field of its own. But XA can also be used to get a different view on the data which is not so easy to obtain in WPA. If threads are ping-ponging each other, this tool makes it flat obvious. XA is already powerful, but I do not entirely follow its UI philosophy, where you must visually spot the issue in the rendered data. Most often tabular data like in WPA is more powerful, because you can sort by columns and filter away specific call stacks, which seems not to be possible with XA. What I miss most in XA is a simple process summary timeline like in the first screenshot. XA renders some nice line graphs, but that is not very helpful to get a fast overview of the total CPU consumption. Compare the complete trace, with the scheduler events and the per-process CPU consumption, in both tools:

XA:

[Screenshot: the complete trace with scheduler events and per-process CPU consumption in XA]

WPA:

[Screenshot: the same trace in WPA]

I have a much easier time in WPA identifying my process with the table and color encoding. In XA you always need to hover over the data to see its actual value. A killer feature in XA would be a thread interaction view for a specific process. Ideally I would like to see all threads as bars, where the bar length is either the CPU or wait time. Currently I can only see one thread, color-encoded by the core it is running on. This is certainly the best view for device driver devs, but normally I am not interested in a per-core view but in a per-thread timeline view. Each thread should have a specific y-value, the horizontal bar length should show either its running or waiting time (or both), and a line should show the readying thread as is already done today.

That would be the perfect thread interaction view, and I hope it will be added to XA. The current version is still a 1.0, so expect some crashes and bugs, but it has a lot of potential. The issues I have encountered so far are:

  • If you press Turn Symbol Off while the trace is still loading, it crashes.
  • The ETL file loading time is very high because it seems to query some private MS symbol servers, where the UI hangs for several minutes (zero CPU but a bit of network IO).
  • UI redraws for bigger (>200MB) ETL files are very slow. Most time seems to be spent in the GPU driver.
  • Spelling error in the Scheduler view: Drivers, Processes, Threads per Core with Reaady threads.

XA certainly has many more features I have not yet found. The main problem with these tools is that the written documentation only scratches the surface. Most things I have learned by playing around with the tools. If you want to share your experiences with WPA or XA, please sound off in the comments. Now stop reading and start playing with the next cool tool!


Lock Pages In Memory


Originally posted on: http://geekswithblogs.net/HumpreyCogay/archive/2016/02/29/lock-pages-in-memory.aspx

SQL Server, together with other RDBMSs, is among the most memory-consuming applications on our servers. This is because RDBMSs usually cache objects in memory to take advantage of the speed that physical memory offers.

Sadly, when Windows decides that its physical memory is not enough for a driver and/or process that is requesting resources, it is forced to trim the working sets of currently running applications. That is bad news for SQL Server, because Windows will be forced to push SQL Server's objects from memory to the server's paging file. You can verify whether Windows is doing this to your SQL Server by checking the SQL Server logs for entries like this:

A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 1086400, committed (KB), memory Utilization: 50%.

Currently there are two settings that we can adjust to avoid, or at least alleviate, this situation:

1. Properly set SQL Server's Max Memory setting, setting aside enough memory for the OS and other running processes like antivirus and server monitoring software.

2. Enable SQL Server Lock Pages in Memory (LPIM).

For this post we will focus on LPIM. When LPIM is used, Windows cannot simply touch the memory space used by SQL Server's buffer pool; it is locked and cannot be paged. SQL Server does this by using Address Windowing Extensions (AWE).

When LPIM is enabled you cannot simply see how much memory SQL Server is really using by looking at Task Manager. As you can see in the screenshot below, SQLSERVR.EXE appears to be using only 49,536 KB.

[Screenshot: Task Manager showing SQLSERVR.EXE memory usage]

You can, however, use RamMap (a free tool from Sysinternals: www.sysinternals.com) to view how much memory AWE is using.

[Screenshot: RamMap showing AWE memory usage]

 

Or you can use the sys.dm_os_process_memory SQL Server Dynamic Management View (DMV).

 

How to Enable LPIM

Use Windows Group Policy tool (gpedit.msc) to enable this policy for the account used by SQL Server. You must be a system administrator to change this policy.

1. On the Start menu, click Run. In the Open box, type gpedit.msc.

2. On the Local Group Policy Editor console, expand Computer Configuration, and then expand Windows Settings.

3. Expand Security Settings, and then expand Local Policies.

4. Select the User Rights Assignment folder.

The policies will be displayed in the details pane.

5. In the pane, double-click Lock pages in memory.

6. In the Local Security Setting – Lock pages in memory dialog box, click Add User or Group.

7. In the Select Users, Service Accounts, or Groups dialog box, add an account with privileges to run sqlservr.exe.

8. Log out and then log back in for this change to take effect.

Note about SQL Server 2008 R2 Standard Edition (64-bit): Microsoft SQL Server 2008 R2 Standard Edition (64-bit, all versions RTM and later) also requires trace flag 845 to be added as a startup parameter so that SQL Server can use locked pages for the buffer pool when the SQL Server service account is granted the Lock Pages in Memory security privilege.

Note about SQL Server 2012 Standard Edition (64-bit): Microsoft SQL Server 2012 Standard Edition (64-bit) does not require you to enable any trace flag to allow SQL Server to use locked pages for the buffer pool when the SQL Server service account is granted the Lock Pages in Memory security privilege.

 

How to determine if LPIM is enabled

Option 1 (Tested on SQL 2005)

DECLARE @LockPagesInMemory VARCHAR(255);
SET @LockPagesInMemory = 'UNKNOWN';

DECLARE @Res TABLE
(
    [output] NVARCHAR(255) NULL
);

IF (SELECT value_in_use
    FROM sys.configurations c
    WHERE c.name = 'xp_cmdshell') = 1
BEGIN
    INSERT INTO @Res
    EXEC xp_cmdshell 'WHOAMI /PRIV';

    IF EXISTS (SELECT *
               FROM @Res
               WHERE [output] LIKE 'SeLockMemoryPrivilege%')
        SET @LockPagesInMemory = 'ENABLED';
    ELSE
        SET @LockPagesInMemory = 'DISABLED';
END

SELECT LockPagesInMemoryEnabled = @LockPagesInMemory;

Option 2 (Tested on SQL 2008)

select osn.node_id,
       osn.memory_node_id,
       osn.node_state_desc,
       omn.locked_page_allocations_kb
from sys.dm_os_memory_nodes omn
inner join sys.dm_os_nodes osn on (omn.memory_node_id = osn.memory_node_id)
where osn.node_state_desc <> 'ONLINE DAC'

NOTE: A non-zero value for locked_page_allocations_kb means Lock Pages in Memory is enabled.

Option 3 (Tested on SQL 2008)

select (physical_memory_in_use_kb / 1024)      as Memory_usedby_Sqlserver_MB,
       (locked_page_allocations_kb / 1024)     as Locked_pages_used_Sqlserver_MB,
       (total_virtual_address_space_kb / 1024) as Total_VAS_in_MB,
       process_physical_memory_low,
       process_virtual_memory_low
from sys.dm_os_process_memory

NOTE: A non-zero locked_page_allocations_kb value means Lock Pages in Memory is enabled.

Option 4

Using xp_readerrorlog:

EXEC xp_readerrorlog 0, 1, 'locked pages'

EXEC xp_readerrorlog 0, 1, 'lock pages in memory'

Technical References/Further Reading:

Great SQL Server Debates: Lock Pages in Memory
https://www.simple-talk.com/sql/database-administration/great-sql-server-debates-lock-pages-in-memory/

How to reduce paging of buffer pool memory in the 64-bit version of SQL Server
https://support.microsoft.com/en-us/kb/918483

Support for Locked Pages on SQL Server Standard Edition (64-bit) systems
https://support.microsoft.com/en-us/kb/970070

How to enable the "locked pages" feature in SQL Server 2012
https://support.microsoft.com/en-us/kb/2659143

Friendly, Readable Expression Trees


Originally posted on: http://geekswithblogs.net/mrsteve/archive/2016/02/29/friendly-readable-expression-trees-debug-visualizer.aspx

We all like working with Expression Trees, right? Creating type-safe functions at runtime when you don't know the types at compile time gives you great performance and is just plain neat. I'm using them in my pet object-object mapper and need to look at the mapping functions it creates. Unfortunately, the default debug view for an Expression Tree looks something like this:

[Screenshot: the default Expression Tree debug view]

…now maybe you're some coding savant who eats IL for breakfast, but I find that pretty unreadable.

So! To get a nicer look at my Expression Trees, I've written ReadableExpressions, a PCL with a single extension method which translates an Expression Tree into something friendlier, like:

[Screenshot: the same Expression Tree translated by ReadableExpressions]

…yes, that's the same Expression Tree as the first screenshot :)

Because I needed one, I also added an Expression for comments:

var comment = ReadableExpression.Comment("Anyone listening?");
Expression<Action> beep = () => Console.Beep();
var commentedBeep = Expression.Block(comment, beep.Body);
var translated = commentedBeep.ToReadableString();

const string EXPECTED = @"
// Anyone listening?
Console.Beep();";

Assert.AreEqual(EXPECTED.TrimStart(), translated);

Uses and How to Download

The ReadableExpressions NuGet package (Install-Package AgileObjects.ReadableExpressions) provides the extension method:

Expression<Func<string, string, int>> convertStringsToInt =
    (str1, str2) => int.Parse(str1) + int.Parse(str2);

var translated = convertStringsToInt.ToReadableString();

Assert.AreEqual("(str1, str2) => int.Parse(str1) + int.Parse(str2)", translated);

…and I've used it to make Expression Debug Visualizers for Visual Studio 10, 12 and 14. They're in the root of the GitHub repo, or for download from:

Now - those are zip files containing a single dll each, and the domain they download from is new, so on Chrome, you might get the following:

[Screenshot: Chrome download warning with a Discard button]

…if so - and if you trust me, which you should because I'm nice - click the arrow next to Discard and choose 'Keep':

[Screenshot: choosing 'Keep' from the menu next to Discard]

Installing the Expression Visualizer

To install the expression visualizer, download the appropriate version using one of the methods above, and copy it into {Program files}\Microsoft Visual Studio {version number}\Common7\Packages\Debugger\Visualizers. Visual Studio will magically use it the next time you start a debugging session. If you encounter any issues you can just delete the file, and VS will fall back to using the default.

sql server won't start as clustered resource after service pack upgrade


Originally posted on: http://geekswithblogs.net/influent1/archive/2016/03/01/173057.aspx

I updated my production SQL Server 2012 cluster from SP1 to SP3 CU1 last night and had to spend an hour trying to figure out why the SQL Server Engine service wouldn't start for one of my two instances. Weirdly, the other instance worked fine after the upgrade. The error logs were no help at all. It was only by the magic of the gods that I happened upon a registry entry that still had the old patch level in it.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\<instance>\Cluster\SharedDataPatchLevel

As soon as I updated the value to the correct level (I saw in the setup log that the update was successful), I was able to start the service in Failover Cluster Manager.


Catching Up with UWP in LINQ to Twitter


Originally posted on: http://geekswithblogs.net/WinAZ/archive/2016/03/02/catching-up-with-uwp-in-linq-to-twitter.aspx

LINQ to Twitter was recently released as v4.x, where one of its main goals was to support Universal Windows Platform development. LINQ to Twitter has supported many platforms, and UWP is a natural evolution. One of the driving forces in the new version is the fact that UWP has its own HTTP client stack, which isn't compatible with PCL. In retrospect, this was an opportunity, because the generic nature of PCL means that you don't inherently have platform-specific fidelity. (Note: that said, there are techniques to achieve this, but because of the nature of PCL it doesn't come out of the box.) This platform-specific motivation and its associated design might be a fun subject for another blog post, but this post is about achieving the primary goal of v4.x: supporting UWP.

As I mentioned, UWP has its own HTTP stack, and developers learned this quickly when LINQ to Twitter wasn't compatible. Jose Fajardo created an early fork of LINQ to Twitter on GitHub, modified to support UWP. This was useful because it not only identified where the problems were, it also helped scope the amount of work required for the UWP solution. Thanks to Jose Fajardo. This was great and it worked, but LINQ to Twitter needed to support multiple platforms. So, v4.x was born.

As a LINQ to Twitter developer, you rarely come into contact with the HTTP client libraries that communicate with the Twitter API. That low-level communication is abstracted through convenience classes. Your LINQ to Twitter queries don't change between v3.x and v4.x either. However, what does change is the OAuth authorization, and you'll learn how in this blog post. I'll use a sample app that's part of the LINQ to Twitter source code on GitHub and show you how to authorize, tweet, and query.

About Authorization

Twitter API security is based on OAuth, a security protocol that gives the user the ability to authorize an application to work on their behalf. The application that you write will expose an interface where a user will authorize your application to operate under that user’s credentials with Twitter. In case it isn’t clear, OAuth is user-centric, in that it gives the user control to protect themselves from abusive programs.

In this spirit, LINQ to Twitter offers a framework for Twitter's flavor of OAuth and supplies wrappers for different technologies and Twitter API OAuth capabilities. Particular to UWP, v4.x introduces the UniversalAuthorizer. In v3.x, a Windows RT app would use a WinRtAuthorizer, but a Windows Phone app would use a PinAuthorizer that interacted with a Web control. The WinRtAuthorizer used the Windows WebAuthenticationBroker, which was a very nice way to support OAuth but wasn't available on Windows Phone. With UWP, the programming model is unified (was that a pun?), with the benefit that WebAuthenticationBroker works for Windows apps on desktops/tablets, phones, and other devices. So, introducing UniversalAuthorizer facilitates that move, making Windows Phone a first-class citizen in LINQ to Twitter v4.x authorization.

The Authorizer

The UniversalAuthorizer derives from LinqToTwitter.AuthorizerBase and implements IAuthorizer. You can use these as customization points or derive from UniversalAuthorizer yourself for additional functionality in your app. You can examine the source code, but behind the scenes, UniversalAuthorizer is using WebAuthenticationBroker, which is very convenient and works well. The following listing shows how to instantiate a new UniversalAuthorizer:

            var authorizer = new UniversalAuthorizer
            {
                CredentialStore = new InMemoryCredentialStore
                {
                    ConsumerKey = "",
                    ConsumerSecret = ""
                },
                Callback = "http://github.com/JoeMayo/LinqToTwitter"
            };

To create an authorizer (including UniversalAuthorizer), instantiate it as shown above. Notice that I also instantiate an InMemoryCredentialStore and assign it to the UniversalAuthorizer.CredentialStore property. You'll need to give the ConsumerKey and ConsumerSecret properties their corresponding keys from your Twitter App page. The Callback property is required (though not used) and you can set it to any URL you like; yes, it's dumb and I have an open issue to research why. InMemoryCredentialStore implements LinqToTwitter.ICredentialStore and is another extensibility point. If you want, write a custom ICredentialStore implementation to manage values and storage, and then plug it into the authorizer by assigning your ICredentialStore instance to the CredentialStore property. You can visit the Security wiki in the LINQ to Twitter documentation for more information on available authorizers.

With a UniversalAuthorizer instance, you can start the authorization process.

Authorizing

Since LINQ to Twitter is async, you must await the AuthorizeAsync method. A typical symptom of forgetting to await an async method is a stack trace showing that the application died in the middle of the state machine with a NullReferenceException. Here’s an example of how to call AuthorizeAsync:

            await authorizer.AuthorizeAsync();

Once you call AuthorizeAsync via authorizer, LINQ to Twitter takes care of all the OAuth protocol details and you’ll see a login screen like this:

[Screenshot: Twitter login/authorization page]

This is from Windows Phone and other devices will be similar. Fill in your login credentials, authorize via the Twitter Authorization page, and control returns to your app.

If you debug and examine authorizer.CredentialStore, you'll see all 4 keys for that particular user, along with UserID and ScreenName. This is the time for you to read those values and save them, associated with the user of your app. The next time that user wants to perform any operation, retrieve these keys and populate CredentialStore. This allows the user to use your application without needing to go through the log-in/authorization process again. You can do this because Twitter does not change these keys, meaning you can save and reuse them the next time the user wants you to access Twitter on their behalf.

Tip: If you re-populate all 4 keys, the user won’t need to go through the authorization process, which is convenient. However, UserID and ScreenName populate as part of the authorization process, so you should grab them the first time the user authorizes. If you need these values again later, because sometimes a ScreenName changes, you can do an Account/VerifyCredentials query.
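Restoring a returning user's keys could then look like this (a sketch; LoadKeysFor is a hypothetical helper standing in for however you persist the values):

    var savedKeys = LoadKeysFor(userId); // hypothetical: reads your stored token values

    var authorizer = new UniversalAuthorizer
    {
        CredentialStore = new InMemoryCredentialStore
        {
            ConsumerKey = "",
            ConsumerSecret = "",
            OAuthToken = savedKeys.Token,
            OAuthTokenSecret = savedKeys.TokenSecret
        }
    };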

After authorization succeeds, or after loading all 4 keys into the credential store, instantiate a TwitterContext, passing in the authorizer instance, like this:

            var ctx = new TwitterContext(authorizer);

You’ll see other libraries and examples where a library constructor accepts keys directly, without a special authorization object. That approach presumes your authorization strategy, of which there are several that Twitter supports– as does LINQ to Twitter. Such a simplification would lead new developers down a path of failure too often, so I opted for a safer, albeit slightly more complex, approach that also demonstrates how much freedom the developer has with authorization. e.g. you could use SingleUserAuthorizer to simplify server applications that only operate on behalf of your company user or ApplicationOnlyAuthorizer if your application doesn’t operate on behalf of a user at all. Instantiate the authorizer, authorize, and pass the authorizer (instance of LinqToTwitter.IAuthorizer) to TwitterContext.

Once you have a TwitterContext instance, you can tweet as explained next.

Tweeting

You can tweet and perform other LINQ to Twitter commands via the TwitterContext instance (ctx in this example):

            Status tweet = await ctx.TweetAsync(userInput);

The userInput variable is some text you want to tweet. As I mentioned earlier, remember that LINQ to Twitter is async and you must await commands and queries.

The response from the Twitter API contains the new tweet's details. In particular for this example, tweet contains a StatusIDResponse property, which holds the ID of the new status. Since this Status type is the same one used for queries, it has a StatusID, used for input, and a StatusIDResponse, used for output. Anytime there's an input where Twitter returns an output of the same name, the convention is for the output to have a "Response" suffix.
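For instance, grabbing the new tweet's ID right after posting (a minimal sketch using the names above):

    Status tweet = await ctx.TweetAsync(userInput);
    var newStatusID = tweet.StatusIDResponse; // the ID Twitter assigned to the new tweet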

The previous examples explained authorization and how to tweet, and you still haven't seen any LINQ. I'll fix that in the next section, which shows how to perform a query.

Querying

The next set of examples shows how to create a different type of authorizer, perform a query, and consume the results of that query. The particular query is a Twitter Search. A search doesn't operate on behalf of a user, so you don't need an authorizer that requires user authorization. This example, shown next, instantiates an ApplicationOnlyAuthorizer:

            var authorizer = new ApplicationOnlyAuthorizer
            {
                CredentialStore = new InMemoryCredentialStore
                {
                    ConsumerKey = "",
                    ConsumerSecret = ""
                }
            };

            await authorizer.AuthorizeAsync();
            var ctx = new TwitterContext(authorizer);

The ApplicationOnlyAuthorizer only needs ConsumerKey and ConsumerSecret. Since it’s based on the application, there isn’t a user authorization process and you don’t have the intermediate step of re-directing to the Twitter Authorization page and back. Like normal, call AuthorizeAsync and pass the authorizer during instantiation of TwitterContext. Now you can perform a search, like this:

            Search searchResponse =
                await
                (from search in ctx.Search
                 where search.Type == SearchType.Search &&
                       search.Query == searchString
                 select search)
                .SingleOrDefaultAsync();

You can see that this is LINQ syntax, and there are some features that are particularly interesting in this LINQ to Twitter query: entity, type, and async. TwitterContext contains several IQueryable<T> properties that refer to different types of queries that you can make. These categories roughly map to similar categories in the Twitter API, and because the Twitter API has evolved, the categories don't always match cleanly. That said, the entities and the categories they represent are mostly semantically equivalent and help the readability of the code. The LINQ to Twitter documentation contains a map between APIs and documentation for each LINQ to Twitter command, or links in query documentation to corresponding Twitter API endpoints, in case you need the extra help. For each entity there is a type, which helps organize the queries. For Search there is only one, SearchType.Search, but other entities have many types that you can query. Each query has properties that correspond to Twitter API parameters. These parameters are documented in the LINQ to Twitter documentation. Remember to only use parameters listed in the documentation; the others correspond to output values and won't work.

Tip: The Twitter API is a REST endpoint that only accepts parameters that the API specifies. You are using LINQ, but it’s a Twitter API specific dialect and you can’t perform the same operations you would with a SQL database. This is the inherent nature of the data source. The typical work-around for manipulating data is to perform the query, pulling in all the data you’ll need and then using LINQ to Objects once you have that data in memory.

As mentioned earlier, LINQ to Twitter is async, so you await queries too. LINQ to Twitter was the first 3rd-party LINQ provider, outside of Microsoft, to support async. I waited a while to see how Microsoft would implement this and received an answer when they added async to the Entity Framework (EF). The LINQ to Twitter async implementation is syntactically similar to EF in that you await the query and materialize it with a standard operator containing the "Async" suffix. You must use the standard operator overload with the "Async" suffix because it returns a Task<T>, where T is Search in this example. The entity type is IQueryable<Search>, and queries on it return a List<Search> response type. The Search query returns a single instance of the Search type, so SingleOrDefaultAsync works well. Other queries return a collection, and you'll want to use ToListAsync in those cases. Each type is different, and you can find out about their contents via the LINQ to Twitter documentation for entities. In the case of a search, here's how you can access the information returned from the Twitter API:

            List<TweetViewModel> tweets =
                (from tweet in searchResponse.Statuses
                 select new TweetViewModel
                 {
                     ImageUrl = tweet.User.ProfileImageUrl,
                     ScreenName = tweet.User.ScreenNameResponse,
                     Text = tweet.Text
                 })
                .ToList();

The Search instance, searchResponse, has a Statuses property that is a List<Status>. After the query materializes, you can use LINQ to Objects to manipulate the collection of results any way you want. This example projected into a List<TweetViewModel> for UI presentation.

The source code is in the Samples folder of the LINQ to Twitter GitHub repository.

Summary

The primary support for UWP was with a new HttpClient and the unification of the authorization model via UniversalAuthorizer. UniversalAuthorizer works for Desktop/Tablet and Windows Phone apps. The authorization types are extensible through IAuthorizer and ICredentialStore, but you can use the implementations that LINQ to Twitter provides too. Remember that LINQ to Twitter is async and bad things happen when you forget to await commands and queries. Once you’ve authorized and instantiated a TwitterContext, use that TwitterContext instance to perform commands and queries with the Twitter API. If you need help, there’s extensive documentation in the LINQ to Twitter GitHub Wiki and you can ask questions on StackOverflow with the linq-to-twitter and twitter tags (tip: sometimes including the C# tag gets a quick answer).

 

@JoeMayo

Spotting a Missing Object


Originally posted on: http://geekswithblogs.net/mrsteve/archive/2016/03/03/spotting-missing-objects.aspx

There are various tell-tale signs when a system is missing an object, and I spotted some of them recently while writing the ReadableExpressions library. ReadableExpressions parses an Expression tree using translator objects, each of which deals with one or more ExpressionTypes. Various types of expressions have zero or more expressions nested within them, namely:

  • Lambdas
  • Loops
  • Conditional statements (if / else or ternary)
  • Switch case statements
  • Try / catch / finally statements

...and in each of those cases you may or may not want the statement(s) enclosed in braces, or want a single-line statement to end with a semi-colon.

My first approach to translating blocks of statements used the following signature:

string TranslateExpressionBody(
    Expression body,
    Type returnType,
    bool encloseSingleStatementsInBrackets = true)

There's a red flag there already - the boolean parameter - but at that point I was only using it to translate Lambdas, Loops and very simple Conditionals, so it worked ok.

As the test cases for ConditionalExpressions got more complex, the problems started. TranslateExpressionBody returned a string which its callers had to examine to see if it contained newlines, then indent, or wrap with braces, or both, or neither. The extra work being done (and duplicated) on the results of the translation was a sign of a missing object.

So! Enter CodeBlock:

class CodeBlock
{
    public bool IsASingleStatement;

    public string AsExpressionBody();

    public CodeBlock Indented();

    public string WithoutBrackets();

    public string WithBrackets();
}

...which as you can see, encapsulates the various things you might want to do or know about a block of code. Eventually TranslateExpressionBody became:

CodeBlock TranslateExpressionBody(Expression body)

Much better!
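A caller can now make presentation decisions from the CodeBlock itself instead of re-parsing strings. A hypothetical conditional translator might do something like this (a sketch using only the members listed above):

    CodeBlock ifTrueBlock = TranslateExpressionBody(conditional.IfTrue);

    string translated = ifTrueBlock.IsASingleStatement
        ? ifTrueBlock.WithoutBrackets()
        : ifTrueBlock.Indented().WithBrackets();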

So, to sum up - you may be missing an object if you find yourself:

  • Undoing or second-guessing work done in one place somewhere else
  • Performing further operations on the result of a method
  • Using boolean parameters to obtain slightly different results from a method
  • Duplicating code

In these cases, you may benefit from moar objects :)

Better single-page apps with ASP.NET MVC 6 - Steve Sanderson NDC2016


Originally posted on: http://geekswithblogs.net/Aligned/archive/2016/03/04/better-single-page-apps-with-asp.net-mvc-6---steve-sanderson.aspx

I watched the very interesting talk by Steve Sanderson (KnockoutJs creator) from NDC 2016 yesterday and took some notes and screenshots that are worth sharing. Watch the video, but here is my summary.

With Steve on the MVC team, there are a lot of helpful and timesaving features coming to MVC. He says MVC 6, but given the middleware and the Mac he uses, it has to be ASP.NET Core 1.

I’m interested in the NodeServices, but I’m currently using Knockout and want to use Aurelia so it’d be good to see more than Angular and React as options.

Here are some notes and screenshots on what piqued my interest:

Minute 15:

MVC has packages and helpers for Angular and React, more coming

AngularJS routing helper method from the Nuget package

validation too, the WebApi usage and integration is what I'm interested in

Minute 28: Razor @Html.PrimeCache(…) shoves the response into the HTML so it doesn't have to be a separate HTTP request

Prerender on the server: <app asp-ng2-prerender-module=”wwwroot/ng-app/components/app/app”>

  • load faster
  • can at least see it without JavaScript, but buttons don’t work
  • My thought: If this could still let you post a form, it would help with the Progressive Enhancement idea. An escalator still works as stairs if they are broken. Some things should still be possible on bad/slow connections, maybe even without JavaScript.

[Screenshot: NodeServices slide]

 

Minute 57:

Transpile es2015 to es5 on the server on the fly with middleware

  • Grunt/Gulp plus Babel are a lot of tools to learn and set up; what if MVC did it for you?
  • NodeServices is used on the server to run node and do the transpiling. I wonder if that can be cached (304) in the normal way?

[Screenshots: NodeServices transpilation demo slides]

 

I’m hoping to get a chance to use this someday soon!

PowerShell: Adding Windows Version to Desktop


Originally posted on: http://geekswithblogs.net/ajames/archive/2016/03/06/powershell-adding-windows-version-to-desktop.aspx

I’m forever looking this up on the web then having to use Reedit to hack my registry so I thought it was about time I wrote a PS script to do it for me. And so here it is!

Get-PSDrive
CD c:\
$RegKey = "HKCU:\Control Panel\Desktop"
#CD 'HKCU:\Control Panel\Desktop'
#Get-ItemProperty -Path $RegKey -Name PaintDesktopVersion

Set-ItemProperty -Path $RegKey -Name PaintDesktopVersion -Value 1
#Get-ItemProperty -Name PaintDesktopVersion

Define a Class in Multiple Files in Node.js


Originally posted on: http://geekswithblogs.net/shaunxu/archive/2016/03/07/define-a-class-in-multiple-files-in-node.js.aspx

Features always grow much faster than we expect. When I was building Worktile Pro I created a JavaScript file containing all the business logic for the task module in one class. But after several months of development it had grown to over 7000 lines of code, which is horrible. Last week I decided to split it into multiple files.

It may not be a big problem to split one JavaScript file into multiple files, especially in a Node.js environment. We can put functions and variables into as many files as we want and "require" them in the "main" file. But if we want to split a class definition into multiple files, that might not work. In JavaScript a class is in essence a function, and a function can be defined in only one file. For example, in the code below I defined a class named "MissionService" with some methods in the file "mission.js".

(function () {
    'use strict';

    var MissionService = function () {};

    MissionService.prototype.createTask = function (taskName) {
        console.log('Task: "' + taskName + '" was created.');
    };

    MissionService.prototype.loadTask = function (taskId) {
        console.log('Task (' + taskId + ') was loaded.');
    };

    MissionService.prototype.updateTask = function (taskId, taskName) {
        console.log('Task (' + taskId + ') was changed to "' + taskName + '".');
    };

    MissionService.prototype.removeTask = function (taskId) {
        console.log('Task (' + taskId + ') was removed.');
    };

    MissionService.prototype.restoreTask = function (taskId) {
        console.log('Task (' + taskId + ') was restored.');
    };

    exports = module.exports = MissionService;
})();

 

The first step is to move the class definition into an "index" file, which will "require" all the following files later. As you can see, this "index.js" file only contains the class definition and exports it.

(function () {
    'use strict';

    var MissionService = function () {};

    exports = module.exports = MissionService;
})();

 

Now we can create "partial" class definition files based on the "index" file I created. Each one just exports a function which accepts the class, so that I can define its methods through "PartialClass.prototype".

// mission.create.js
(function () {
    'use strict';

    exports = module.exports = function (MissionService) {
        MissionService.prototype.createTask = function (taskName) {
            console.log('Task: "' + taskName + '" was created.');
        };
    };
})();

// mission.update.js
(function () {
    'use strict';

    exports = module.exports = function (MissionService) {
        MissionService.prototype.updateTask = function (taskId, taskName) {
            console.log('Task (' + taskId + ') was changed to "' + taskName + '".');
        };

        MissionService.prototype.removeTask = function (taskId) {
            console.log('Task (' + taskId + ') was removed.');
        };

        MissionService.prototype.restoreTask = function (taskId) {
            console.log('Task (' + taskId + ') was restored.');
        };
    };
})();

// mission.find.js
(function () {
    'use strict';

    exports = module.exports = function (MissionService) {
        MissionService.prototype.loadTask = function (taskId) {
            console.log('Task (' + taskId + ') was loaded.');
        };
    };
})();

 

Now back in the "index" file, what we need to do is "require" these partial class files, passing in the class we defined as the parameter so that they attach their methods.

(function () {
    'use strict';

    var MissionService = function () {};

    require('./mission.create.js')(MissionService);
    require('./mission.update.js')(MissionService);
    require('./mission.find.js')(MissionService);

    exports = module.exports = MissionService;
})();

 

Finally, when we want to use this class, we just "require" the "index" file and "new" an instance as below.

(function () {
    'use strict';

    var MissionService = require('./index.js');
    var mission = new MissionService();

    mission.createTask('Shaun\'s task.');
    mission.loadTask(1);
    mission.updateTask(1, 'Shaun\'s new task.');
    mission.removeTask(1);
    mission.restoreTask(1);
})();

 

[Screenshot: console output of the sample]

If we have some internal helper functions or variables, we can put them into "shared" files.

(function () {
    'use strict';

    exports = module.exports = function (MissionService) {
        MissionService.prototype._log = function (message) {
            console.log(message);
        };
    };
})();

 

Just ensure we "require" them before we "require" the partial class files that use them.

// index.js
(function () {
    'use strict';

    var MissionService = function () {};

    require('./mission.shared.js')(MissionService);
    require('./mission.create.js')(MissionService);
    require('./mission.update.js')(MissionService);
    require('./mission.find.js')(MissionService);

    exports = module.exports = MissionService;
})();

// mission.create.js
(function () {
    'use strict';

    exports = module.exports = function (MissionService) {
        MissionService.prototype.createTask = function (taskName) {
            this._log('Task: "' + taskName + '" was created.');
        };
    };
})();

// mission.update.js
(function () {
    'use strict';

    exports = module.exports = function (MissionService) {
        MissionService.prototype.updateTask = function (taskId, taskName) {
            this._log('Task (' + taskId + ') was changed to "' + taskName + '".');
        };

        MissionService.prototype.removeTask = function (taskId) {
            this._log('Task (' + taskId + ') was removed.');
        };

        MissionService.prototype.restoreTask = function (taskId) {
            this._log('Task (' + taskId + ') was restored.');
        };
    };
})();

// mission.find.js
(function () {
    'use strict';

    exports = module.exports = function (MissionService) {
        MissionService.prototype.loadTask = function (taskId) {
            this._log('Task (' + taskId + ') was loaded.');
        };
    };
})();

 

At the end, we can put all of the files into a folder and rename the "index" file to "index.js". Now we can require our class by the folder name, e.g. var MissionService = require('./missionService') if the folder is named "missionService", which is friendlier.
[Screenshot: the final folder layout with index.js and the partial class files]

 

Hope this helps,

Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Xu. This work is licensed under the Creative Commons License.

SharePoint 2016 Configuration Change to Support AppFabric Background Garbage Collection


Originally posted on: http://geekswithblogs.net/bjackett/archive/2016/03/08/sharepoint-2016-configuration-change-to-support-for-appfabric-background-garbage.aspx

!Note: This post is written as of the SharePoint 2016 Release Candidate.  Pre-release software is subject to change prior to release.  I will update this post once SharePoint 2016 hits RTM or the related information has changed!

   In this post I’ll walk through the steps to enable background garbage collection for AppFabric 1.1, which is used by the SharePoint 2016 Distributed Cache service.  I also provide a sample PowerShell script to automate the change.  Skip down to the Solution section for the specific changes and a script to automate implementing them.

 

Background

   The change that I describe is not a new one.  It was first introduced during SharePoint 2013’s lifecycle when Microsoft AppFabric 1.1 Cumulative Update 3 (CU3) was released.  CU3 allowed for a non-blocking garbage collection to take place but in order to take advantage of this capability an administrator needed to update a Distributed Cache configuration file (described below in the Solution section).  Later Microsoft AppFabric cumulative updates also require this same change to the configuration file.

   Fast forward to SharePoint 2016 which continues to use Microsoft AppFabric 1.1 for the Distributed Cache service.  As of the release candidate (RC) SharePoint 2016 ships with Microsoft AppFabric 1.1 Cumulative Update 7.  Since this cumulative update builds upon CU3 it also requires the same configuration file change to enable background garbage collection.

 

Problem

  Depending on server configuration, hardware, workloads being run, and other factors, a SharePoint farm may or may not experience issues with the Distributed Cache service if the background garbage collection change has not been applied.  In my lab environment I simulated load (10-50 requests / sec) against the SharePoint Newsfeed.  After a few minutes I began to experience issues with Newsfeed posts not appearing, and eventually the Distributed Cache service instances crashed on the two servers hosting that service.  A restart of the AppFabric service allowed the Distributed Cache to recover and function normally again.

 

Solution

   The configuration change to allow for background garbage collection in Microsoft AppFabric 1.1 is outlined in Cumulative Update 3.  An administrator who has access to the SharePoint server(s) hosting the Distributed Cache service will need to perform the following actions.

  1. Upgrade the Distributed Cache servers to the .NET Framework 4.5 (as of the publishing of this blog .Net 4.5 is no longer supported and .Net 4.5.2 will need to be installed.)
  2. Install the cumulative update package (already installed for SharePoint 2016 Release Candidate).
  3. Enable the fix by adding / updating the following setting in the DistributedCacheService.exe.config file:
    <appSettings><add key="backgroundGC" value="true"/></appSettings>
  4. Restart the AppFabric Caching service for the update to take effect.
Note: By default, the DistributedCacheService.exe.config file is located under the following directory:
”%ProgramFiles%\AppFabric 1.1 for Windows Server” where %ProgramFiles% is the folder where Windows Program Files are installed.

 

   While it is possible to modify this file by hand, it is preferable to automate this process, especially when multiple servers need to be updated.  The below script leverages the System.Configuration.ConfigurationManager class to make the necessary changes on an individual server running the Distributed Cache service.

Note: This script must be run from each server running the Distributed Cache service.  For an automated way to run on all Distributed Cache servers in a SharePoint farm see the PowerShell snippet following this script.

 

THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

 

Download link:

https://gallery.technet.microsoft.com/SharePoint-update-7816fa74

[system.reflection.assembly]::LoadWithPartialName("System.Configuration") | Out-Null

# intentionally leave off the trailing ".config" as OpenExeConfiguration will auto-append that
$configFilePath = "$env:ProgramFiles\AppFabric 1.1 for Windows Server\DistributedCacheService.exe"
$appFabricConfig = [System.Configuration.ConfigurationManager]::OpenExeConfiguration($configFilePath)

# if backgroundGC setting does not exist add it, else check if value is "false" and change to "true"
if ($appFabricConfig.AppSettings.Settings.AllKeys -notcontains "backgroundGC")
{
    $appFabricConfig.AppSettings.Settings.Add("backgroundGC", "true")
}
elseif ($appFabricConfig.AppSettings.Settings["backgroundGC"].Value -eq "false")
{
    $appFabricConfig.AppSettings.Settings["backgroundGC"].Value = "true"
}

# save changes to config file
$appFabricConfig.Save()

 

   Optionally, the following snippet can be run from any machine in a SharePoint farm that has the SharePoint cmdlets available.  It will identify each Distributed Cache server and remotely run the previous script to implement the Distributed Cache configuration change.

Note: Update $UpdateDistributedCacheScriptPath with the path of the above script.  Also ensure that  PowerShell remoting is enabled and the account running the script has access to the target machines.

 

$UpdateDistributedCacheScriptPath = "C:\Scripts\UpdateDistributedCacheBackgroundGCSetting.ps1"

$serversRunningDistributedCache = Get-SPServiceInstance |
    where TypeName -eq "Distributed Cache" |
    select Server |
    %{ $_.Server.ToString().Split('=')[1] }

foreach ($server in $serversRunningDistributedCache)
{
    Write-Verbose "Modifying config file on server: $server"
    Invoke-Command -FilePath $UpdateDistributedCacheScriptPath -ComputerName $server
    Write-Verbose "Script completed on server: $server"
}

 

Conclusion

   In this post I walked through the update required to enable background garbage collection in Microsoft AppFabric 1.1 Cumulative Update 3 and higher.  This configuration change is required for SharePoint 2013 and SharePoint 2016 (as of the Release Candidate).  I also provided a script for automating the process of implementing this configuration change.  I’m told a future update may automatically apply this change for SharePoint 2016.  If and when that change is released, I’ll update this post to reflect it.

 

      -Frog Out


Easy Handle Leak Detection Without A Debugger


Originally posted on: http://geekswithblogs.net/akraus1/archive/2016/03/14/173308.aspx

Finding handle leaks in all processes at once, for all handle types, without a debugger is no longer impossible. Since Windows 8.1 (8.0?) every handle creation and close call is instrumented with an ETW event. You only need to turn it on, execute your use case for some minutes (or hours if you really need to), and then stop the recording.

To start full handle tracing you need to install the Windows Performance Toolkit from the Windows 10 SDK or WDK. Then enter in an Administrator shell:

  • wpr -start Handle
  • Execute your use case
  • wpr -stop c:\temp\Handle.etl

Then you can open the resulting .ETL file with WPA and add the graph Handles - Outstanding Count by Process to your analysis view.

[Screenshot: the Handles - Outstanding Count by Process graph in WPA]

Now you can filter for your process (in my case I started Visual Studio). The original view gives a system-wide view of all processes which allocated handles.

[Screenshot: system-wide outstanding handle counts per process]

That is a nice view, but if you are after a handle leak you need the Create Stack. No problem: right-click on the table header and add Create Stack to the column list. Then you should load the symbols from MS and add your local symbol paths.

[Screenshot: adding the Create Stack column and configuring symbol paths]

With the call stacks you can drill into the allocation stack of any handle and search for your leak:

[Screenshot: drilling into the handle allocation call stacks]

The graph nicely shows the not-yet-freed handles, but the table shows all allocations, which can be a bit confusing when you search for the not-yet-released handles. For big handle leaks the existing view is already enough, but if you need to drill down in the table only into the call stacks of not-yet-released handles, you need to add a filter to exclude all rows which released their handle before the trace was stopped.

More Details

To add that filter, open the gear icon or press Ctrl+E:

[Screenshot: the WPA view editor dialog]

Because we are doing advanced things we click on the Advanced icon

[Screenshot: the Advanced tab of the view editor]

and there we can finally add the trace end time which is visible at the bottom of the WPA main window

[Screenshot: adding the trace end time as a filter condition]

Now the graph and the table are updated and show only the handles which have not been released since the start of Visual Studio in our example, which should match the number of allocated handles shown by Task Manager.

[Screenshot: the filtered view of not-yet-released handles]

You can also get more fancy. Normally I have some test which shows a handle leak in a specific process after some time. I start leak tracing, then the test, and later I stop it. Since I do not want to treat first-time initialization effects as leaks, I can exclude e.g. the first 5 minutes of the test. I also want to make sure that I do not count handles as leaks which were allocated near the end only because the test was still running when the trace was stopped. To do that I need to look for recurring patterns in the trace and exclude all handles which were created at some later time, when the test run was already complete. The final result is a filter which hides all entries that match

[Close Time]:<"20,861s" OR [Create Time]:<"5s" OR [Create Time]:>"15s"

After all noise is removed, any handle leak, even a small one, is only a matter of drilling into the allocation call stacks and fixing the code. If you have a handle leak on a Windows 8.1 (or possibly 8.0) or later machine, this approach is much easier and faster than using Windbg and the !htrace command, which is nicely explained at https://blogs.technet.microsoft.com/yongrhee/2011/12/19/how-to-troubleshoot-a-handle-leak/.

Why So Late?

I have no idea why this very useful capability of WPA was never documented anywhere. It showed up in the Windows 8 SDK years ago, but handle leak tracing never worked for me because at the time I was still on Windows 7.

Which Handle Type did I Leak?

The easiest way is to use another tool. Process Hacker is a Process Explorer clone which can show a nice summary for any process. Double-click on a process and select the Statistics tab:

image

When you click on Details you can sort by Handle Count, and you immediately know which handle type to search a leak for:

image

PerfView for Advanced Recording

The only other tool I know of which can enable handle leak tracing is PerfView v1.9 from 2/19/2016 or later

image

PerfView has the unique capability to stop tracing based on a performance counter threshold. This is extremely useful to catch, e.g., a sudden handle spike which occurs during an overnight stress test at 5 a.m.; when you arrive at the office at 6 a.m. (already too late ;-)) the spike will long since have been overwritten by newer handle allocations in the 500 MB ring buffer. Now you can get your breakfast, arrive relaxed at 9 a.m., start analyzing the random handle spike your colleagues missed while sitting in front of Windbg all night, and present the results to your manager at 10 a.m.

The only issue I have with PerfView is that its performance counter query is locale sensitive, which makes it non-trivial to specify on, e.g., a Hungarian machine. For the record: on my German machine I can start handle leak tracing that stops when the handle count of the first devenv instance exceeds 2000 with

  • perfview collect c:\temp\HandleLeak.etl /kernelEvents=Handle /StopOnPerfCounter:"Prozess:Handleanzahl:devenv>2000"
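
On an English Windows installation the counter names differ. I have not verified the exact localized names, so treat this as an assumption, but the equivalent query should look like:

  • perfview collect c:\temp\HandleLeak.etl /kernelEvents=Handle /StopOnPerfCounter:"Process:Handle Count:devenv>2000"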

The feature finally seems to have been set free with the Windows 10 SDK, but handle leak tracing has existed in the kernel since Windows 8.1 (or possibly 8.0); no tool was capable of enabling it until now. Before this ETW feature, handle leaks were quite hard to track down, but with such advanced and easy-to-use tooling it is just a matter of two command line calls to get all allocated handles from all processes in one go.

If you leak User objects (windows, menus, cursors, …) or GDI objects (device contexts, brushes, fonts, …) you still need to resort to intercepting the corresponding OS methods in your target process, like I have shown in Generic Resource Leak Detection with ETW and EasyHook, but as usual you need to use the right tool for the job at hand to nail all bugs of your application.

Conclusions

With the addition of ETW tracing for handle allocations it has never been easier to solve handle leaks. Previously it was a pretty complex undertaking, but now you can follow the steps above and, if you analyze the gathered data correctly, achieve a nearly 100% fix rate. If this has helped you solve a long-hunted leak, or you have other useful information to share, sound off in the comments.


Pay Attention to Use ES6 Arrow Function with 'arguments'


Originally posted on: http://geekswithblogs.net/shaunxu/archive/2016/03/15/pay-attention-to-use-es6-arrow-function-with-arguments.aspx

There is an enhancement in ECMAScript 6 named "Arrow Functions" which, like lambda expressions in C#, allows us to define a function in fewer lines of code. I like this new feature and began to use it in my Node.js applications as much as I could. But today, when I was using the JavaScript built-in "arguments" variable, I found something wrong.


Assume we have a simple module that adds numbers. I'm using arrow functions to implement it as below.

// calc.js

(() => {
    'use strict';

    exports.add = (x, y) => {
        return x + y;
    };
})();

Then I can use it as below.

// app.js

(() => {
    'use strict';

    const calc = require('./calc.js');

    let x = 2;
    let y = 3;
    let result1 = calc.add(x, y);
    console.log(`${x} + ${y} = ${result1}`);
})();


Now I create another method in my module that allows the user to pass in multiple numbers to add. In the traditional JavaScript way I don't need to declare parameters on the function; I can use the "arguments" variable, an array-like object that contains the parameters, add each of them, and return the sum.

// calc.js
(() => {
    'use strict';

    exports.add = (x, y) => {
        return x + y;
    };

    exports.addMany = () => {
        let args = [].slice.call(arguments);
        let result = 0;
        for (let x of args) {
            result += x;
        }
        return result;
    };
})();

// app.js
(() => {
    'use strict';

    const calc = require('./calc.js');

    let x = 2;
    let y = 3;
    let result1 = calc.add(x, y);
    console.log(`${x} + ${y} = ${result1}`);

    let x1 = 1;
    let x2 = 2;
    let x3 = 3;
    let x4 = 4;
    let x5 = 5;
    let x6 = 6;
    let x7 = 7;
    let result2 = calc.addMany(x1, x2, x3, x4, x5, x6, x7);
    console.log(`result2 = ${result2}`);
})();


But when I ran this application I got the error below.

I'm using Node.js v5.7.0 which supports ES6 features.

Screen Shot 2016-03-15 at 15.55.10


If we read the arrow function specification carefully we will find that it captures the "this" value of the enclosing context. This makes it convenient to use the parent's "this" inside an arrow function without needing an extra variable to hold the parent's "this" value. But the side effect is that it also captures the "arguments" value from the parent context.

In my code I defined the "addMany" function as an arrow function. It therefore copied "this" from the parent context, which is the whole module, as well as "arguments", which holds the module-loading function's arguments.

Screen Shot 2016-03-15 at 16.03.29

To fix this problem, simply define the function in the normal way, as below. It will then use its own "this" and "arguments".

exports.addMany = function () {
    let args = [].slice.call(arguments);
    let result = 0;
    for (let x of args) {
        result += x;
    }
    return result;
};


Screen Shot 2016-03-15 at 16.05.54

Alternatively, if you are OK with enabling one of Node.js' staged ES6 features called "Rest Parameters", you can define the function as below, which allows the parameters to be passed in as a real array.

exports.addMany = (...args) => {
    let result = 0;
    for (let x of args) {
        result += x;
    }
    return result;
};

Then execute the application with the Node.js option "--harmony_rest_parameters".
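
For example, with the entry script from above:

node --harmony_rest_parameters app.js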

Screen Shot 2016-03-15 at 16.10.03


Hope this helps,

Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Xu. This work is licensed under the Creative Commons License.

ASP.NET Core and MVC 6 Lessons Learned


Originally posted on: http://geekswithblogs.net/mrsteve/archive/2016/03/15/asp.net-core-mvc-6-lessons.aspx

I recently finished a small website using ASP.NET Core and MVC 6. I only scratched the surface of the framework, but here are some gotchas and things I picked up along the way. If you're entirely unfamiliar with ASP.NET Core and MVC 6, it might be a good idea to read up a bit on that first.

Gotchas!

node_modules Folder

The default project.json contains the following:

"exclude": [  "wwwroot",  "node_modules"]

…defining folders to ignore when publishing the project. "Well" I thought, "I'm not using node, so I can clean that up a bit":

"exclude": [  "wwwroot"]

That's better! Admittedly only OCD-better, but that still counts :p No point excluding the node_modules folder if there isn't going to be one, right?

Well, once I started using Gulp for CSS minification my project wouldn't build! I got this:

NodeError

"The design time host build failed with the following error:" - with no further error details. With diagnostic build output I found a 'path too long' error as detailed here, and through that bug report I [eventually] figured out that the path in question waaaaaas… node_modules. Gulp had added node files in that directory and the compilation process was falling over when they were included in the build. Adding node_modules back into the exclude setting fixed it, but that took an annoying amount of time to figure out.

DI Concrete Types

I usually use StructureMap for DI, so I'm used to injecting concrete types into constructors without having to think about it. ASP.NET Core comes with its own built-in DI container, but it doesn't support concrete dependencies without them being configured. Like this!

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<EmailSender>();
}

…hardly a chore, but this was a very simple application. On anything of significant complexity I'll use StructureMap instead of the baked-in container.
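
For reference, the StructureMap wiring at the composition root looked roughly like this at the time. This is a sketch which assumes the StructureMap.Dnx integration package; the package name and APIs were moving targets during the release candidates, so check it against whatever version you pull in:

// Startup.cs - return an IServiceProvider instead of void
public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var container = new Container(config =>
    {
        config.Scan(scan =>
        {
            scan.TheCallingAssembly();
            scan.WithDefaultConventions(); // concrete types resolve without explicit registration
        });
    });

    // Hand the framework's registrations to StructureMap and let it take over resolution.
    container.Populate(services);
    return container.GetInstance<IServiceProvider>();
}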

Routing

The normal MVC /controller/action routing didn't work right out of the box (not that it usually does) so I added attribute-based routing like this:

[Route("[controller]")]public class ContactController :Controller{[HttpPost]
    [Route("Send")]public async Task<IActionResult> Send(ContactData senderData)
    {
// Omitted
}}

…and that worked fine. But seeing as all my attributes were doing was setting up the default routes, I switched to setting up a default route in Startup.Configure(), like this:

public void Configure(IApplicationBuilder app)
{
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "Default",
            template: "[controller]/[action]",
            defaults: new { controller = "Home", action = "Index" });
    });
}

…but that didn't work. What? I changed it to the built-in default route method:

public void Configure(IApplicationBuilder app)
{
    app.UseMvcWithDefaultRoute();
}

…and that didn't work either. I put the attributes back on, it worked. I removed them, it didn't. Eventually through some magical incantation of removing and re-adding route configuration - switching it off and back on again in other words - UseMvcWithDefaultRoute() worked without routing attributes. Not sure what happened there.
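
There is one plausible explanation worth noting here (an observation on the code above, not something I verified against this exact project): conventional route templates use curly-brace parameters, while the square-bracket [controller] and [action] tokens are replaced only in attribute routes. In a MapRoute template they are treated as literal text, so that route would never match. The conventional equivalent of the attribute setup would be:

public void Configure(IApplicationBuilder app)
{
    app.UseMvc(routes =>
    {
        // {curly} parameters belong to conventional routes; [square] tokens
        // are token replacement for attribute routing only.
        routes.MapRoute(
            name: "Default",
            template: "{controller=Home}/{action=Index}");
    });
}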

Package.json

package.json is the configuration file used by npm to manage the dependencies Node.js needs to perform Gulp's client-side tasks (Bower keeps its own bower.json for client-side packages). As an aside I have a knee-jerk reaction against using different package managers for client- and server-side packages, but I guess client-side package management is a task already performed well by Bower, so there's sense in using it for that instead of NuGet… I guess?

Anyway, package.json does not appear in Solution Explorer:

SolutionExplorer

…you get to it like this:

PackageJsonMenu

…that wasn't terribly intuitive to me given that project.json (which contains the server-side dependencies) appears in Solution Explorer just fine. You can actually make package.json appear by removing the following line from your xproj file:

<ItemGroup>
  <DnxInvisibleContent Include="bower.json" />
  <DnxInvisibleContent Include=".bowerrc" />
  <DnxInvisibleContent Include="package.json" /> <!-- This one! -->
</ItemGroup>

...and I suspect doing so has no negative side-effects, but I don't know for sure, so I didn't bother.

Cool Stuff

Tag Helpers

Tag helpers are a less obtrusive alternative to MVC 5's many Html.Blah() helper methods, and IMO give you much cleaner view markup:

@* Helper method version *@
@Html.TextBoxFor(m => m.Subject, new { @class = "wide" })

@* Tag Helper version *@
<input asp-for="Subject" class="wide" />

You can read more about them at the link above, but I found adding attributes to standard markup much nicer than using the helper methods.
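
Worth noting: tag helpers have to be switched on for your views, typically once in Views/_ViewImports.cshtml. With the RC1-era package naming the directive looked like the line below; the assembly name changed across prereleases, so match it to the package your project actually references:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"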

Transparent Azure Configuration

I'm hosting my project on Azure, and wanted to use the application configuration settings available in the portal. After a false start using CloudConfigurationManager (which NuGet installed without fuss but which didn't work at all) it turned out that an ASP.NET Core application hosted on Azure transparently uses the application settings if they're available. All I had to do was set up configuration in the standard way:

public class Startup
{
    public Startup()
    {
        Configuration = new ConfigurationBuilder()
            .AddJsonFile("appSettings.json", optional: true)
            .AddEnvironmentVariables()
            .Build();
    }

    public IConfiguration Configuration { get; set; }

…and values are automagically pulled from Azure settings if they exist. Adding the Startup.Configuration property instance to the built-in DI container like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddInstance(Configuration);
}

…makes IConfiguration accessible as an injected dependency, like this:

public class EmailSender
{
    private readonly IConfiguration _settings;

    public EmailSender(IConfiguration settings)
    {
        _settings = settings;
    }

    public async Task SendAsync(ContactData senderData)
    {
        var localDomain = _settings["LocalDomain"];

…which saves you the task of abstracting your configuration - something I'm used to having to do. Neat! :)

Controller and View Discovery

I prefer to group project content by feature instead of in folders named Controllers, Models and Views, but doing that in MVC 5 means you have to tell the framework where to find controllers. Not so in MVC 6, which finds them wherever they are without fuss. Nice! The same unfortunately isn't true of Views, but it's pretty easy to re-configure:

In Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<RazorViewEngineOptions>(options =>
    {
        options.ViewLocationExpanders.Add(new ViewLocationExpander());
    });

...and in the ViewLocationExpander:

public class ViewLocationExpander : IViewLocationExpander
{
    public IEnumerable<string> ExpandViewLocations(
        ViewLocationExpanderContext context,
        IEnumerable<string> viewLocations)
    {
        return new[]
        {
            "/Home/{0}.cshtml",
            "/Contact/{0}.cshtml"
        }.Concat(viewLocations).ToArray();
    }

...you simply return an enumerable of strings containing possible View locations. That's it!
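
For completeness: IViewLocationExpander also declares a PopulateValues method, elided in the snippet above. When the view locations don't depend on anything from the current request, an empty implementation is enough:

// Required by IViewLocationExpander; it can stay empty when the set of
// view locations does not vary per request.
public void PopulateValues(ViewLocationExpanderContext context)
{
}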

Overall I really enjoyed working with ASP.NET Core and MVC 6, and I look forward to putting it to work on a more complex project in future.
