
Introducing Cadru 2.0

Originally posted on: http://geekswithblogs.net/sdorman/archive/2015/11/10/introducing-cadru-2.0.aspx

Just over two years ago, Cadru was released as an open source framework. Since then, there have been numerous changes, improvements, and updates. This latest update includes a lot of new features and, unfortunately, one breaking change.

First up, the breaking change.

In an earlier release, ToRelativeTimeString and ToRelativeDateString methods were added that included a formatting options enum named RelativeDateFormattingOptions. The problem is that this was a plural name for a non-flags enum and it really didn’t portray the meaning of the enum very well. This has been renamed to just RelativeDateFormatting.

Now, for the new features.

A new extension method on IEnumerable called Partition has been added. It takes an IEnumerable and breaks it up into smaller collections of equal size. If there aren’t enough elements to fully populate the final partition, that partition simply contains the remaining elements.

int[] numbers = { 0, 30, 20, 15, 90, 85, 40, 75 }; 
var partitions = numbers.Partition(3).ToArray(); 

// partitions[0] = { 0, 30, 20 }
// partitions[1] = { 15, 90, 85 }
// partitions[2] = { 40, 75 }

Next is a simple ReverseComparisonComparer. This takes a given Comparison<T> instance and reverses the Compare operation.
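For illustration, here is a minimal sketch of the idea behind such a comparer; this is not necessarily Cadru's exact implementation or API, just the concept of wrapping a Comparison&lt;T&gt; and reversing it:

using System;
using System.Collections.Generic;

// Hypothetical sketch: wraps a Comparison<T> and swaps the arguments,
// so every Compare call returns the reversed ordering.
public sealed class ReverseComparisonComparer<T> : IComparer<T>
{
    private readonly Comparison<T> comparison;

    public ReverseComparisonComparer(Comparison<T> comparison)
    {
        this.comparison = comparison;
    }

    public int Compare(T x, T y)
    {
        // Reversing is simply calling the original comparison with swapped operands.
        return this.comparison(y, x);
    }
}

// Usage: a list sorted with the reversed comparer ends up in descending order.
// var numbers = new List<int> { 3, 1, 2 };
// numbers.Sort(new ReverseComparisonComparer<int>((a, b) => a.CompareTo(b)));
// numbers is now { 3, 2, 1 }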

The code contracts classes, Requires and Assumes, also gained a method to test if a parameter is of a given type and one to test if it’s an enum.
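As a rough illustration of how these checks might be used (the method names and parameter order shown here are assumptions based on this description, not taken from the actual Cadru API):

// Hypothetical usage of the new code contract checks:
// verify a parameter is of a given type, and verify a parameter is an enum.
Requires.IsType(stream, typeof(FileStream), "stream");
Requires.IsEnum(status, "status");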

The Cadru.UnitTest.Framework library also picked up a change by adding a WithInnerException method on ExceptionAssert. This allows you to assert that the inner exception of a thrown exception is a specific exception type.
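A hedged sketch of what such an assertion could look like in a test; the fluent shape shown here is an assumption, so check the Cadru.UnitTest.Framework source for the actual signature:

// Hypothetical usage: assert that the thrown exception wraps an ArgumentNullException.
ExceptionAssert.Throws<InvalidOperationException>(() => service.Run())
               .WithInnerException(typeof(ArgumentNullException));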

Enumerated types also gained some new methods. Cadru always had extension methods on enums, one of which was GetDescription, which would return the value of an EnumDescription attribute applied to an enum member. When the strongly typed Enum<T> class was introduced, it was purely a strongly typed pass-through for the methods exposed by the standard Enum class. With this latest release, Enum<T> gains GetDescription and GetDescriptions. In addition, to keep things consistent, the GetDescription extension method has been updated to behave in the same way as Enum<T>.GetDescription, specifically, if an EnumDescription attribute does not exist, it returns null. To keep this from being a breaking change, this behavior was introduced as a new overload.
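For example, usage along these lines is what the post describes; the enum and its descriptions below are made up for illustration, and the exact Enum&lt;T&gt;.GetDescription signature may differ:

public enum BuildStatus
{
    [EnumDescription("Build is queued")]
    Queued,

    [EnumDescription("Build is running")]
    InProgress,

    // No EnumDescription attribute here.
    Done
}

// Strongly typed access via Enum<T>:
string queued = Enum<BuildStatus>.GetDescription(BuildStatus.Queued); // "Build is queued"
string done   = Enum<BuildStatus>.GetDescription(BuildStatus.Done);   // null, since no EnumDescription attribute exists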

Finally, and probably the biggest addition to this release, is support for Ranges, through the Range<T> class. This allows easy creation of a range of values. Since it’s a generic class, you can create a range over pretty much any data type. Ranges are similar to a mathematical interval and allow you to include or exclude either endpoint (using standard interval notation), provide intersection and union operations, and be enumerable using a default enumeration function or a custom function. (I will describe ranges in much more detail in a separate blog post.)

var range = new Range<char>('a', 'e', RangeEndpointOption.Closed);
range.SetDefaultEnumerator();
var expanded = range.ToList();

// expanded = {'b', 'c', 'd' }

Unit tests and code coverage

Ever since Cadru was first created (long before it became an open source framework), it always had good unit tests and code coverage. I’m very happy that tradition has continued. Even with all of the new changes (about 26 files spread over 15 different commits), I’m still at 89.49% code coverage for the entire library (all 4 projects) and 98.41% code coverage for Cadru.Core where almost all of these changes took place (according to the analysis tools built in to Visual Studio).

Be sure to check out the newest release on NuGet.

Bugs and feature requests

Do you have a bug or a feature request? Please use the issue tracker and search for existing and closed issues. If your problem or request isn't addressed yet, go ahead and open a new issue.

Contributing

You can also get involved and fork the repository to submit your own pull requests. (More detailed contributor guidelines will be available soon.)


Coexistence between Exchange forests (without trusts…) -- Part 8: Enabling Provisioning

Originally posted on: http://geekswithblogs.net/marcde/archive/2015/11/11/coexistence-between-exchange-forests-without-trustshellip-----part-8.aspx


Step 1: Open the “Synchronization Service Manager”.

Step 2: Open the Options from the Tools menu.

Step 3: Under Metaverse Rules Extensions, verify that the Enable metaverse rules extensions box is checked.

Step 4: Verify that the Enable Provisioning Rules Extension box is checked.

Step 5: Click “OK”.

Coexistence between Exchange forests (without trusts…)  -- Part 7: Creating Synchronization Agents
Coexistence between Exchange forests (without trusts…)  -- Part 9: Synchronization!

Using Visual Studio Online's new build system to achieve a Continuous Delivery pipeline Presentation

Originally posted on: http://geekswithblogs.net/Aligned/archive/2015/11/11/using-visual-studio-onlines-new-build-system-to-achieve-a.aspx

I had the honor and pleasure of giving my first presentation at an event at South Dakota Code Camp last weekend on November 7th. Thanks to the organizers and sponsors, I’m looking forward to next year already.

I talked about the importance of Continuous Delivery in today’s development practices and how it helps us get quality code to our users quicker and more reliably using automated processes. Microsoft’s build system has improved immensely in the last year, and one of my goals was to show what I’ve learned, what’s possible, and to prompt attendees to try it out themselves.

1. Microsoft has greatly improved their build system.
2. You can get started quickly, but there is a lot of depth to dig into.
3. Visual Studio Online gets the latest code first; it is then rolled out to on-premises TFS installs.
4. You can build non-Microsoft technologies with it and run Node.js npm, Bower, and Grunt/Gulp tasks in it, as well as iOS, Make, Maven, Java, Ant, and more.

I’ve added some code to my GitHub project and added a lot of notes and links in the readme.

Here’s the presentation: https://prezi.com/yxubm04sgqqu/using-visual-studio-onlines-new-build-system-to-achieve-a-c/

The code includes some approaches I’ve learned for writing Selenium tests to do automated UI testing of a web project.

I did run into some snags trying to get the Selenium tests running in the Azure build setup, but I’ll keep digging into it. I have a lot to learn about WinRM remote PowerShell, setting up trusted hosts, and configuring things. Thankfully, the build configuration web UI is much simpler. I tried following this

Here are a few more links, but see my readme for a lot more:

Forrester Research on DevOps
TFS build 2015 first look on Pluralsight


https://www.flickr.com/photos/136778967@N05/22326396553/

If you need a presenter or are interested in the subject, please send me a note!


Is This A CPU Bug?

Originally posted on: http://geekswithblogs.net/akraus1/archive/2015/11/12/168683.aspx

I see a lot of different code and issues. One interesting bug was where someone removed a few lines of code, yet the regression test suite consistently reported a 100 ms slowdown. Luckily the regression test suite was using ETW by default, so I could compare the good baseline with the bad one and also take a look at the code change. The profiling diff did not make much sense: there was a slowdown, but for no apparent reason the CultureInfo.CurrentCulture.DisplayName property had become ca. 100 ms slower.

How can that be? To make things even more mysterious, when they changed some other unrelated code the numbers returned to normal. After looking into it more deeply I found that the basic application logic had not slowed down; instead, some unrelated methods had simply become much more CPU hungry, namely internal CLR methods such as COMInterlocked::CompareExchange64. The interesting thing is that it happened only under 32 bit; under 64 bit the problem went away. If you are totally confused by now, you are in good company. But there is hope. I had encountered a similar problem over a year ago, so I knew it had something to do with the interlocked intrinsics for 64 bit operands in 32 bit code. The most prominent one on 32 bit is

lock cmpxchg8b qword ptr [some register pointing to a memory location] 

which is heavily used by the CLR interlocked methods. To reproduce the problem cleanly I wrote a little C program and played around a bit to see what the real issue is. It turns out it is ……

Memory Alignment

A picture will tell more than 1000 words:

[Image: a 64 bit variable allocated on a 4 byte boundary, crossing two cache lines]

The CPU cache is organized in cache lines which are usually 64 bytes wide. You can find out the cache line size of your CPU with the nice Sysinternals tool Coreinfo. On my Haswell home machine it prints something like this:

Logical Processor to Cache Map:
**------  Data Cache          0, Level 1,   32 KB, Assoc   8, LineSize  64
**------  Instruction Cache   0, Level 1,   32 KB, Assoc   8, LineSize  64
**------  Unified Cache       0, Level 2,  256 KB, Assoc   8, LineSize  64
********  Unified Cache       1, Level 3,    8 MB, Assoc  16, LineSize  64

The most important number for the following is the LineSize of 64, which tells us how big the smallest memory unit is that is managed by the CPU cache controller. Now back to our slow lock cmpxchg8b instruction. The effect of the lock prefix is that one core gets exclusive access to a memory location. This is usually implemented on the CPU by locking one cache line, which is quite fast. But what happens if the variable spans two cache lines? In that case the CPU seems to lock all cache lines, which is much more expensive. The effect is that it is at least 10-20 times slower than before. It seems that our .NET application in x86 allocated a 64 bit variable on a 4 byte (int32) boundary at an address that crossed two cache lines (see picture above). If by bad luck we use variable 7 for a 64 bit interlocked operation, we will cause an expensive global cache lock.

Since under 64 bit the class layout is usually 8 byte aligned, we practically never see variables spanning two cache lines, which makes all cache line related slowdowns go away; our application was working as expected under 64 bit. The issue is still there, but the class layout makes it much harder to get into this situation. Under 32 bit, however, we can frequently find data structures with 4 byte alignment, which can cause sudden slowdowns if the memory location we are hitting sits on a cache line boundary. Now it is easy to write a repro for the issue:

using System;
using System.Diagnostics;
using System.Globalization;

namespace InterlockedFun
{
    class Program
    {
        static void Main(string[] args)
        {
            int len = 1;
            if (args.Length == 1)
            {
                len = int.Parse(args[0]);
            }
            var b = new byte[len];

            var sw = Stopwatch.StartNew();

            var name = CultureInfo.CurrentCulture.DisplayName;

            sw.Stop();
            Console.WriteLine("DisplayName property did take {0:F0}ms", sw.Elapsed.TotalMilliseconds);
        }
    }
}

That is all. You only need to allocate enough data on the managed heap so that the other data structures will at some point hit a cache line boundary. To force this you can try different byte counts with a simple for loop on the command line:

for /L %i in (1,1,64) do InterlockedFun.exe %i

At some point the measured times will change quite a lot:

InterlockedFun.exe 29
DisplayName property did take 17ms

InterlockedFun.exe 30
DisplayName property did take 17ms

InterlockedFun.exe 31
DisplayName property did take 17ms

InterlockedFun.exe 32
DisplayName property did take 17ms

InterlockedFun.exe 33
DisplayName property did take 128ms

InterlockedFun.exe 34
DisplayName property did take 93ms

InterlockedFun.exe 35
DisplayName property did take 77ms

You can play with the little sample for yourself to find the worst performing version on your machine. If you now look at WPA with a differential view you will find that CompareExchange64 is responsible for the measured difference:


Since that was such a nice problem, here is the actual C code I used to verify that the issue only pops up at cache line boundaries:

#include "stdafx.h"
#include <windows.h>
#include <chrono>constint Iterations = 1000 * 1000;  // yeah heavy locking
size_t  Alignment = 4; // simulate DWORD alignmentint main(int argc, char **argv)
{if (argc == 2)
    {
        Alignment = atoi(argv[1]);
        _tprintf(L"Alignment: %I64d", Alignment);
    }

    auto pAligned = (LONG64 *)_aligned_malloc(10 * sizeof(LONG64), 64);

    auto v1 = (LONG64 *) (((byte *)pAligned) + Alignment)+7; // Now shift our 64 byte cache line aligned variable by 4 bytes and then go 7 // int64 to the right to land on the border of two cache lines

    auto start = std::chrono::high_resolution_clock::now();

    for (int k = 0; k < Iterations; k++)  // simulate many interlocked operations on a variable which crosses two cache lines
    {
        _InterlockedCompareExchange64(v1, 100, 100);
    }
    auto stop = std::chrono::high_resolution_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    _tprintf(L"\nInterlocked %d iterations did take %I64dms. Average duration interlocked operation: %f us", Iterations,  ms, (ms*1000.0f)/Iterations);

    _aligned_free(pAligned);
    return 0;
}

This will print with bad 4 byte alignment

Interlocked 1000000 iterations did take 1104ms. Average duration interlocked operation: 1.104000 us

but with 8 byte alignment

Interlocked 1000000 iterations did take 26ms. Average duration interlocked operation: 0.026000 us

That is a whopping factor of 42 faster. No wonder the Intel manual recommends aligning such variables, on page 258 of the Intel 64 and IA-32 Architectures Software Developer's Manual (System Programming Guide):

… The integrity of a bus lock is not affected by the alignment of the memory field. The LOCK semantics are followed
for as many bus cycles as necessary to update the entire operand. However, it is recommend that locked accesses
be aligned on their natural boundaries for better system performance:
• Any boundary for an 8-bit access (locked or otherwise).
• 16-bit boundary for locked word accesses.
• 32-bit boundary for locked doubleword accesses.
• 64-bit boundary for locked quadword accesses.  …

The word better should be written in big red letters. Unfortunately, 32 bit code seems to have a much higher probability of causing random performance issues in real world applications than 64 bit code, due to the memory layout of some data structures. This is not an issue which only makes your own application slower. If you execute the C version concurrently

start  cmpxchg.exe && cmpxchg.exe

then you will get not 1 s but 1.5 s of runtime because of the processor bus locking. In reality it is not as bad as this test suggests, because if the other application uses correctly aligned variables it will operate at normal speed. But if two applications exhibit the same error, they will slow each other down.

If you use an allocator which does not care about natural variable alignment rules, such as the GC allocator, you can run into issues which can be pretty hard to find. 64 bit code can also be plagued by such issues, because there are also 128 bit interlocked intrinsics. With the AVX2 SIMD extensions, memory alignment is becoming mainstream again. If people tell you that memory alignment and CPU caches play no role in today's high level programming languages, you can prove them wrong with a simple 8 line C# application. To come to an end and to answer the question in the headline: no, it is not a CPU bug, but an important detail of how CPU performance is affected when you use interlocked intrinsics on variables which span more than one cache line. Performance is an implementation detail. To find out how bad it gets, you need to measure for yourself in your scenario.


2015 Harrisburg .Net Code Camp

Originally posted on: http://geekswithblogs.net/raysmithequip/archive/2015/11/14/168722.aspx

With about 100 or so signed up already, it looks like we will be having another great Code Camp!!

If I am lucky I will be able to duck into a couple of sessions!  Over the years I have gotten to know some of the speakers, and all I can say is they are ALL good!

Lance Wulfers has found us yet another code camp sponsor, IBM!!  In addition to providing us coffee and donuts, they have some special offers for the attendees...

A Free e-book on IBM Bluemix

http://www.redbooks.ibm.com/abstracts/redp5242.html

6 Month Free Trial of IBM Bluemix

www.surveymonkey.com/r/bluemixPenn

I will be back to thank all the other sponsors, including ITT, where the event is hosted yearly, TEK Systems, and all the rest.  I wish I could comment on INETA evaporating, but there is not much to say.  We do miss them already.  Although the 2015 Code Camp does not start for another few hours, I am already thinking ahead to Code Camp 2016.  So far this year went pretty darn well, and the group's members have stepped up well while I slacked off a little bit these past couple of months.  ITT Tech has been wonderful and I am looking forward to seeing the results of the renovations.  More to come after the event; be there or be stuck looking at a page prompt.....




Compatibility Problem with Microsoft Test Manager 2010 and Visual Studio 2011

Originally posted on: http://geekswithblogs.net/jakob/archive/2015/11/15/compatibility-problem-with-microsoft-test-manager-2010-and-visual-studio.aspx

UPDATE 10.01.2012:

The issue has been resolved by Microsoft and will be addressed in a patch soon. Here is the full description from the Connect site:


“We've identified the rootcause. This bug was introduced in the compatibility GDR patch released for VS 2010 to work against 2011 TFS Server. We shall be releasing a patch soon. Till then, please follow the workaround mentioned to unblock yourselves. “

When setting up a physical environment for a new test controller on our TFS 2010 server, I ran into a problem that seems to be related to having installed the Visual Studio 2010 SP1 TFS Compatibility GDR and/or the Visual Studio 2011 Developer Preview on the same machine as Visual Studio 2010 (SP1).

 

The problem occurs when trying to add a test agent to the physical environment; MTM gives the following error:


Failed to obtain available machines from the selected test controller.


Clicking on the View details link shows the following error dialog:



Error dialog: Cannot communicate with the Controller due to version mismatch

 


I have investigated the problem together with Microsoft, and they are working on finding out why this is happening. I have posted the issue on the Connect site here:
https://connect.microsoft.com/VisualStudio/feedback/details/712290/microsoft-test-manager-2010-can-not-communicate-with-test-controllers-when-visual-studio-11-is-installed-on-the-same-machine

 

Workaround

Fortunately, we found a workaround that is not too bad. When facing this problem, go to the Controllers tab that lists all the controllers. If you select the controller from the list, it will actually show the test agent.

 


Then go back to the Environments tab and voilà, the test agent now appears on the list. It seems like the

I’ll post an update when the issue has been resolved by MS.


TFS 2010 Inmeta Build Explorer

Originally posted on: http://geekswithblogs.net/jakob/archive/2015/11/15/tfs-2010-inmeta-build-explorer.aspx

This weekend we at Inmeta released a free Visual Studio 2010 Team Explorer extension that solves the problem of the Builds node in the Team Explorer not being hierarchical. For some reason, this part of the Team Explorer didn’t get the nice hierarchical folder structure that the Work Items node got in 2010. The result is that, for a company that has several hundred builds in the same team project, it becomes very hard to navigate.

The solution that we implemented is very simple and uses a naming convention to group the build definitions in folders. The default separator is ‘.’ (dot), which is probably the most common convention used anyway. As it turns out, Microsoft DevDiv uses this convention internally, as posted by Brian Harry. And they have a _lot_ of build definitions…


This is what the build explorer looks like:


 

As you can see, if you have a multi-part name, such as Inmeta.TFS Exception Reporter.Production, you get two folders in the hierarchy.


The Build Explorer is available in the Visual Studio Gallery, either download it from http://visualstudiogallery.msdn.microsoft.com/35daa606-4917-43c4-98ab-38632d9dbd45, or use the Visual Studio Extension Manager directly (search for Inmeta):


 

The extension was developed mostly by Lars Nilsson, with some smaller additions by myself and Terje Sandström.

The source code is available at http://tfsbuildfolders.codeplex.com. Let us know what you think and if you want to contribute, contact me or Terje at the Codeplex site.


Working with Build Definitions in TFS Team Build 2010

Originally posted on: http://geekswithblogs.net/jakob/archive/2015/11/15/working-with-build-definitions-in-tfs-team-build-2010.aspx

Disclaimer: This blog post discusses features in the TFS 2010 Beta 1 release. Some of these  features might be changed in the RTM release.

In my last post I talked about the new major features of Team Build in TFS 2010. This time, I will go into more detail on how you work with build definitions. In TFS 2010, the whole build process is now implemented on top of Windows Workflow Foundation 4.0 (WF4). This means that everything that has to do with creating and customizing builds in TFS 2010 is now done using a workflow designer UI, so you no longer have to remember all the different MSBuild targets when you want to insert some custom logic in your build. On the other hand, you obviously need to understand how a default team build process is implemented, which activity does what, and which WF properties and variables exist. Eventually you might also have to learn how to implement custom workflow activities when you need more functionality than what is included in the standard team build activities.

Note that MSBuild is still used to actually compile all the projects. The output from the compilations is available in a separate log file that can be opened from the build summary view.

 

So, let's create a new build definition. When you select the New Build Definition menu item, you get a dialog that looks very much like the one in TFS 2008.

 

General
This tab just contains the name and the description of your build. There is also a checkbox that lets you disable the build definition, in case you want to work on it more before enabling it.


Trigger
Here you define how this build should be queued. The only new option here in 2010 is Gated Check-in, which is a very cool feature that will stop you from checking in anything that breaks the build.

 


Workspace
This tab has not changed since 2008. Here you define the workspace for the build, i.e. what part of the source control tree should be downloaded as part of the build. Here I set $/Demo/WpfApplication1 as my workspace root. You always want to make your workspace as small as possible to speed up build time.


 

Build Defaults
In the previous version of Team Build you selected which build agent should run the build. In 2010, you now select a Build Controller. The build controller manages a pool of build agents; an agent is selected by an algorithm that takes into account the queue length on each build agent, in a round-robin fashion (although this algorithm is not yet documented, and it is not clear if you can implement your own algorithm).

In addition, you must enter the drop location for the build.

 


Process

 

Now we come to the interesting part! Here you select the Build process file, which is a Windows Workflow XAML file that must be located somewhere in your TFS source control repository. By default, for all new team projects there are two build process files created automatically, DefaultTemplate and UpgradeTemplate. The default template is the standard Team Build process, with the get, label, compile, etc. steps. The UpgradeTemplate process file can be used to execute legacy builds, i.e. TFSBuild.proj files.


 

This functionality, i.e. selecting a build process template from a list, is in itself a nice improvement over earlier versions, where you always had to create a standard build process and then modify the TFSBuild.proj accordingly. (Lots of people instead wrote applications that create TFSBuild.proj programmatically to simplify the process.)

However, you should not use the default template as the process file for your builds. Instead, you should create a new template from the default template and use that one. You do this by clicking the New button:


This mechanism lets you create a set of build process templates (for example, one template for CI builds, one for nightly builds, one for release builds, etc.). These templates can be stored in a dedicated location in source control, and changes to them should only be allowed for the build managers. Application developers can then set up new builds from the existing templates and should only need to modify the parameters (see below), which are not part of the template but are stored together with the build definition.

 

You can view and/or edit the build process file by clicking the link, which takes you to the source control explorer; then double-click the xaml file to open it up in the workflow designer. The following (slightly MSPaint hacked) screen shot shows you the top level process of the DefaultTemplate build process:

 


You can drill-down into the different activities to see how the process is designed. In my next post I will show how to customize the build process by adding new activities to it.

 

When you have selected the build process template, you then go through the parameters of the build. The properties are defined in the build process as arguments to the workflow and correspond to the MSBuild properties in the previous versions. If you have used team build before, you’ll definitely recognize many of the properties. The most important ones are:

 

  • Projects to Build – The list of build projects.
  • Configurations to Build – The list of configurations to build, in the format configuration|platform. Sample: Debug|Any CPU, Release|Any CPU
  • Build Number Format – The format of the unique build number that is generated for each build. Sample: $(BuildDefinitionName)_$(Date:yyyyMMdd)$(Rev:.r)
  • Clean Workspace – Controls which artifacts should be deleted before the build starts. All – deletes both sources and outputs (full rebuild); Outputs – deletes outputs and gets only the sources that have changed (incremental get); None – leaves existing outputs and sources in place (incremental build).
  • MSBuild Arguments – Additional command line arguments to pass to MSBuild.exe. Sample: /p:Configuration=Debug
  • Associate Changesets and Work Items – Controls whether Team Build should associate changesets and work items with the build. True/False; consider False for continuous builds to speed them up.

 

Retention Policy
In this tab you select how builds should be retained. Note that you can now select a different configuration for manual/triggered builds and private builds. Private here means builds with the Gated Check-in trigger enabled. You will typically want to retain fewer private builds compared with the manual/triggered builds:

 


Ok, you are done! Save the build definition and queue a build in the team explorer. When the build finishes, double click it to see the Build summary view:

 


 

 

 

 

 

For a detailed view of the build, click the View Log link:


 

A nice feature here is the Show Property Values link. This shows the log, but in addition it shows each in/out property for each activity. This is very useful when trying to troubleshoot a failing build:


 

OK, this was a quick walkthrough of how to create a basic build definition in Team Build 2010. In my next post, I will show how to customize the build process using the workflow designer!


Writing a Code Coverage Checkin Policy

Originally posted on: http://geekswithblogs.net/jakob/archive/2015/11/15/writing-a-code-coverage-checkin-policy.aspx

The source code for this policy is available here : http://www.codeplex.com/TFSCCCheckinPolicy

Checkin policies are a great tool in TFS for keeping your code base clean and adhering to your company standards and policies.  The checkin policies that are included are very useful, but don’t stop there! Implementing your own custom checkin policy is pretty straightforward and can soon pay off by stopping people from doing silly things (on purpose or not…).

At our company (Osiris Data) we have developed several small checkin policies that both stop people from breaking our standards and help them do the right thing. We all make mistakes from time to time, and if a tool can help us avoid them, then that’s pretty good… :-)

For example, we have a checkin policy that stops people from checking binaries into TFS. Of course there are occasions when people are allowed to do this (3rd party DLLs, binary references), so then we check that the binaries are placed in folders that are named according to our naming policies, thereby enforcing standards across the team projects.


I recently saw a post in one of the MSDN forums asking for a checkin policy that would check code coverage as part of a check-in. That is, if the latest test run either does not have code coverage at all, or the total code coverage percentage is below a certain threshold, the policy would stop the check-in. I couldn’t find any such checkin policy on the net, so I decided that it would be fun to write one.

 
The following things must be solved:

1) Locating the latest test run and code coverage information
2) Analyzing the code coverage information


The first part was simple to implement. Unfortunately there does not seem to be anything in the VS.NET extensibility API that allows you to locate the test runs or the code coverage information, so I basically had to walk the folder structure beneath the current solution to locate the folder with the latest test run. Simple and rather boring, so I won’t mention that code here.


The second part was a bit worse, since the API for running and analysing code coverage is totally undocumented and, frankly, not supported by MS. However, the following blog post by Joe contained the information I needed in order to load and analyse the code coverage information. As always with unsupported stuff, there is no guarantee that the code will work with new versions of VSTS or even service packs. This code has been tested on VSTS 2008 SP1.

The code coverage result is stored in a proprietary binary format and is located beneath the test run result. The local folder structure looks like this:

 
Solution
    ----- TestResults
              ----- TestRun1
                        ----- In
                                  ----- data.coverage
                        ----- Out
                                  ----- Binaries from the instrumented assemblies


To programmatically access and analyse the code coverage results, we need a reference to the Microsoft.VisualStudio.Coverage.Analysis assembly, which is located in the private assemblies folder of VSTS. In this assembly, the CoverageInfoManager class loads the coverage file; its CreateInfoFromFile method returns an instance of the CoverageInfo class. CoverageInfo in turn has a method, appropriately called BuildDataSet, that returns a typed dataset from which we can easily read the information.

The code snippet for loading the coverage file and calculating the total code coverage in percent looks like this:

CoverageInfoManager.ExePath = binariesFolder;
CoverageInfoManager.SymPath = binariesFolder;
 
CoverageInfo ci = CoverageInfoManager.CreateInfoFromFile(codeCoverageFile);
CoverageDS data = ci.BuildDataSet(null);
 
uint blocksCovered = 0;
uint blocksNotCovered = 0;
foreach (CoverageDS.ModuleRow m in data.Module)
{
    blocksCovered += m.BlocksCovered;
    blocksNotCovered += m.BlocksNotCovered;
}
 
return GetPercentCoverage(blocksCovered, blocksNotCovered);

Note that we must set the ExePath and the SymPath properties to the folder where the instrumented assemblies are located. If not, the BuildDataSet method will throw a CoverageException.

 
So all we have to do then is implement the PolicyBase.Evaluate method and compare the total code coverage with the configurable threshold. The threshold is configured by implementing the CanEdit and Edit methods. See the source code for how this is done; it is all standard checkin policy stuff.
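To make the idea concrete, here is a hedged sketch of what the Evaluate override could look like; GetTotalCodeCoverage and the threshold field stand in for the pieces described above and are not the literal names from the published source:

// Sketch of the policy evaluation: fail the check-in when coverage is below the threshold.
public override PolicyFailure[] Evaluate()
{
    double totalCodeCoverage = GetTotalCodeCoverage(); // locates the latest test run and reads data.coverage

    if (totalCodeCoverage < this.threshold)
    {
        string message = string.Format(
            "Code coverage is {0:F2}%, which is below the required {1:F2}%.",
            totalCodeCoverage, this.threshold);
        return new[] { new PolicyFailure(message, this) };
    }

    return new PolicyFailure[0];
}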


Hopefully this checkin policy will be useful for some people; let me know about any problems and I will try to fix them ASAP.


ADOM now on Steam, a review.

Originally posted on: http://geekswithblogs.net/cwilliams/archive/2015/11/17/168820.aspx

It's hard to believe that I've been playing this game for nearly 20 years.  My favorite game (ever) has just released on Steam this week.  If you like RPGs and/or Roguelike Games, you owe it to yourself to check out Ancient Domains of Mystery (ADOM) on Steam.  Since this is release week, it's on sale. 


Get It Here:   http://store.steampowered.com/app/333300

If you aren't familiar with ADOM, it's a roguelike game. Maybe you've heard of Rogue, Dwarf Fortress, Nethack, Heroic Adventure!, etc...?  It's best described as a Tactical-RPG where death is permanent and all items are randomized when a new game is created. Discovery is a huge part of a game like this, and there is plenty to discover here. Villages, Magic, Story, Questlines... ADOM has it all, and is one of the original roguelikes.

You may be thinking, how is this a roguelike... it's too pretty. It's true, ADOM has gotten a fresh coat of paint, along with some necessary engine overhauls to make it work in the Steam ecosphere, but it's still very much the same game at heart.  In fact, ASCII mode is only a couple of clicks away if that's how you roll. (Though I have to admit, the new interface is really sweet.)

I've only gotten a few games in since release yesterday, but I can attest to many, many late (all) nighters with this game.  There's a certain threshold of progress past which you really don't want to stop, for fear of angering the RNG gods.

If you're looking for a fun, but brutal, RPG... you owe it to yourself to pick this one up.



Coexistence between Exchange forests (without trusts…) -- Part 9: Synchronization!

Originally posted on: http://geekswithblogs.net/marcde/archive/2015/11/18/coexistence-between-exchange-forests-without-trustshellip-----part-9.aspx

Note: The entire list must be run through in order, and you cannot progress to the next step until the current step has completed for all management agents!

Step 1: Right click the management agent, select “Run” and “Full Import (Staging Only)”.

Step 2: Right click the management agent, select “Run” and “Full Synchronization”.

Step 3: Right click the management agent, select “Run” and “Export”.

Step 4: Right click the management agent, select “Run” and “Delta Import”.

Note: Step 4 (Delta Import) is to confirm the export was successful.

Note: It is recommended to verify that the contact objects were created in all Active Directory forests at this point.

 

Create a scheduled task to run hourly synchronization

Note: This is required to keep both GALs up to date. If these hourly (or regular) synchronizations are not performed, the GALs will eventually no longer contain accurate information.

Note: The code for the synchronization scripts is located in the references chapter. No warranty is given on this code by the author of this document or Microsoft, and it should be reviewed, as well as tested, before being placed in production environments.

Step 1: Open Task Scheduler (Administrative Tools > Task Scheduler).

Step 2: In the Actions Pane, click on Create Basic Task….

Step 3: Enter a Name and Description.

Step 4: As Trigger, use Daily.

Step 5: Have the task recur every day and select a start time.

Step 6: Use Start a program as action.

Step 7: Enter powershell in the Program/script field and -command .\start-sync.ps1 in the Add arguments field. In the Start in field, enter the directory where the script is located.

Step 8: In the Finish pane, tick the box next to Open the Properties dialog for this task when I click finish and click Finish.

Step 9: In the properties of the newly created task, click on the Triggers tab.

Step 10: In the Triggers tab, click the edit button to adapt the schedule of the task.

Step 11: In the Edit Trigger window, tick the box next to Repeat task every: and select 1 hour from the drop down box. Click on the drop down box next to for a duration of and select Indefinitely. Click OK.

Step 12: Click on Task Scheduler Library, select your task, right click it and select Run to test the execution of the task. A PowerShell window will open and show the progression through each synchronization step.

 


Coexistence between Exchange forests (without trusts…)  -- Part 8: Enabling Provisioning

Coexistence between Exchange forests (without trusts…)  -- Part 10: Configuring Free/Busy

Microsoft Announces Next Generation of Visual Studio Release Management

Originally posted on: http://geekswithblogs.net/jakob/archive/2015/11/18/microsoft-announces-next-generation-of-visual-studio-release-management.aspx

Today at the Microsoft Connect() event, Microsoft announced the public preview of the brand new version of Visual Studio Release Management. The public preview is available on Visual Studio Team Services (a.k.a. Visual Studio Online, in case you missed that announcement! :-)), and it will debut on premises later in 2016.

 

So, what’s this new version about? Let’s summarize some of its major features:


Web Based

The existing version of Visual Studio Release Management, which was originally acquired from InCycle back in 2013, uses a standalone WPF client for authoring, triggering and tracking releases. It always felt a bit awkward and wasn’t really integrated with the rest of TFS. The new version is completely rewritten as a web based experience and is part of the web access, as a new “Release” tab.


From this hub you can author release definitions, manage approval workflows and trigger and track releases.

 

Shared Infrastructure with TFS Build

With the new build system in TFS 2015, Microsoft already has a great automation platform that is scriptable, cross platform, and easy to deploy and configure. So it makes sense that the new version of Visual Studio Release Management is built upon the same platform. This means the build agent that is used for running builds can also be used for executing releases.

It also means that all the new build tasks that are available in TFS Build 2015 can also be used as part of a release pipeline.


 

Cross Platform Support

As mentioned above, since the same agent is used for releases, we can also run them on Linux and OS X, as these are supported platforms. There are many tasks out of the box for doing cross platform deployment, including Chef and Docker.


 

Track Releases across Environments

The new web UI makes it easy to get an overview of the status of your existing environments, and which version of which application is currently deployed. In the example below we can see that the new release of the “QuizBox” application has been deployed to Dev and QA, has gone through automated and manual acceptance tests, and is currently being deployed to the staging slot of the production environment.


 

Configuration Management

One of the biggest challenges with doing staged deployments is configuration management. The different environments often have different configuration settings: things like connection strings, account names, and passwords. In Visual Studio Release Management vNext these configuration variables can be authored either on the environment level or on the release definition level, where they apply to all environments.

We can easily compare the configuration variables across our environments, as shown below.


 

Live Release Log Output

As with the new build system in TFS 2015, VSRM vNext gives you excellent real time logging from the release agent, as the release is executing.


 

Release Approval

Every environment in the release pipeline can trigger approvals, either before the deployment starts or after. For example, before we want to deploy a new version of an application to the QA environment, the QA team should be able to approve it to make sure that the environment is ready.

Below you can see a release that has a pending approval. Every approver that should take action will receive a notification email with a link to this page.


 

Do you want to learn more?

For the last 6 months, my fellow ALM MVP and good friend Mathias Olausson and I have been busy working on a book that covers, among other things, this new version of Visual Studio Release Management. The book is titled Continuous Delivery with Visual Studio ALM 2015 and covers how the process of continuous delivery can be implemented using the Visual Studio 2015 ALM tool suite.



I will write a separate blog post about the book, but here is the description from Amazon:


This book is the authoritative source on implementing Continuous Delivery practices using Microsoft’s Visual Studio and TFS 2015. Microsoft MVP authors Mathias Olausson and Jakob Ehn translate the theory behind this methodology and show step by step how to implement Continuous Delivery in a real world environment.

Building good software is challenging. Building high-quality software on a tight schedule can be close to impossible. Continuous Delivery is an agile and iterative technique that enables developers to deliver solid, working software in every iteration. Continuous delivery practices help IT organizations reduce risk and potentially become as nimble, agile, and innovative as startups.

In this book, you'll learn:

  • What Continuous Delivery is and how to use it to create better software more efficiently using Visual Studio 2015
  • How to use Team Foundation Server 2015 and Visual Studio Online to plan, design, and implement powerful and reliable deployment pipelines
  • Detailed step-by-step instructions for implementing Continuous Delivery on a real project

 

You can find the book at  http://www.amazon.com/Continuous-Delivery-Visual-Studio-2015/dp/1484212738.

We hope that you will find it valuable!


New Book – Continuous Delivery with Visual Studio ALM 2015

Originally posted on: http://geekswithblogs.net/jakob/archive/2015/11/18/new-book-ndash-continuous-delivery-with-visual-studio-alm-2015.aspx

With today’s announcement at Microsoft Connect() about the public preview of the next generation of Visual Studio Release Management, it is also time to announce the (imminent) release of a new book that covers among other things this new version of RM.

My fellow ALM MVP Mathias Olausson and I have been working hard on this book during the last 6 months, using early alpha and beta versions of this brand new version of Visual Studio Release Management. Writing about a changing platform can be rather challenging, and our publisher (Apress) has been very patient with us regarding delays and late changes!


About the book

The book is titled Continuous Delivery with Visual Studio ALM 2015 and aims to be a more practical complement to Jez Humble’s seminal Continuous Delivery book, with a heavy focus, of course, on how to implement these processes using the Visual Studio ALM platform.

The book discusses the principles and practices around continuous delivery and continuous deployment, including release planning, source control management, build and test automation, and deployment pipelines. The book uses a fictitious sample application throughout as a concrete example of how to implement a continuous delivery workflow on a real application.

We hope that you will find this book useful and valuable!

 

Abstract

This book is the authoritative source on implementing Continuous Delivery practices using Microsoft’s Visual Studio and TFS 2015. Microsoft MVP authors Mathias Olausson and Jakob Ehn translate the theory behind this methodology and show step by step how to implement Continuous Delivery in a real world environment.

Building good software is challenging. Building high-quality software on a tight schedule can be close to impossible. Continuous Delivery is an agile and iterative technique that enables developers to deliver solid, working software in every iteration. Continuous delivery practices help IT organizations reduce risk and potentially become as nimble, agile, and innovative as startups.

In this book, you'll learn:

  • What Continuous Delivery is and how to use it to create better software more efficiently using Visual Studio 2015
  • How to use Team Foundation Server 2015 and Visual Studio Online to plan, design, and implement powerful and reliable deployment pipelines
  • Detailed step-by-step instructions for implementing Continuous Delivery on a real project

 

Table of Contents

Chapter 1: Introduction to Continuous Delivery
Chapter 2: Overview of Visual Studio 2015 ALM
Chapter 3: Designing an Application for Continuous Delivery
Chapter 4: Managing the Release Process
Chapter 5: Source Control Management
Chapter 6: PowerShell for Deployment
Chapter 7: Build Automation
Chapter 8: Managing Code Quality
Chapter 9: Continuous Testing
Chapter 10: Building a Deployment Pipeline
Chapter 11: Measure and Learn


Revive your laptop with SSD

Originally posted on: http://geekswithblogs.net/BlueProbe/archive/2015/11/19/168924.aspx

If you haven’t already done so, replace your laptop’s hard drive with an SSD. Boot speed alone is 5x as fast. I dropped in size because of price, but cloud storage, etc. more than compensates. I picked up a PNY 240 GB, 6 Gb/s drive from a Black Friday sale. So, now’s the time. Start shopping.


Applying Entity Framework Code First Migrations to Azure Database

Originally posted on: http://geekswithblogs.net/paulp/archive/2015/11/20/168974.aspx

EF Code First Migrations are a great way to manage data model changes in a .NET application. When publishing to Azure you can enable migrations by checking the Execute Code First Migrations box in the Publish Settings wizard. This will force migrations to run on each application start, which may not be OK for everyone. If there is seed data in Configuration.cs, which is common in dev/testing, this will cause it to run on each application restart. To avoid this you can resort to the Update-Database command. In order to deploy migrations to the database of your choice, use the connection string parameters. In the Package Manager Console, select the project with the migrations and run the following command:

Update-Database -ConnectionString "{Azure Database Connection String}" -ConnectionProviderName "System.Data.SqlClient"

If you want it scripted outside of Visual Studio, you can call the EF command line tool migrate.exe from a PowerShell script. Something like this:

& ".\migrate.exe" DataAccess.dll /startUpConfigurationFile=DataAccess.dll.config /connectionString=$ConnectionString  /connectionProviderName="System.Data.SqlClient"


On Co-Location, Email, and Face-To-Face Communication

Originally posted on: http://geekswithblogs.net/dlussier/archive/2015/11/20/168972.aspx

I broke my own rule – I tweeted a thought as a controversial statement.


From this erupted responses from numerous people holding the other point of view (mainly on the co-location piece and whether face-to-face is necessary, not so much defending email). So instead of trying to discuss this in 140 characters, here’s my full train of thought on this.

The tweet was born out of two different events:

1) A tweet by Steve Porter that “email is not a collaboration tool” (I fav’d it).

2) My team had just worked through some design decisions in our team room, where we all sit together.

I was riding a high of team collaboration mixed with crusading angst against email when I combined all that into a single tweet.

Now let me talk through my thought process on all these.

Co-Location is Incredibly Important

When I think of the teams I’ve been on that have succeeded the most, they all share a common theme – they all worked in the same room or the same area (physical impediments (walls, hallways, different floors) are *real* barriers to teams collaborating together). Being co-located had the spin-off benefit that the team felt more like a team than a group of employees assigned to the same project – there’s a difference. Eating together, playing board games together, building relationships with each other – all of these happened, and ultimately benefited how we worked together because we were co-located.

There’s a growing thought that employees don’t need to be co-located, and in this wired & connected world it shouldn’t matter where we work. I can’t (and am not trying to) argue with people’s personal experiences where they feel they’ve had success. That’s basically what I’m saying in this post – when I’ve had the most success in projects, co-location has been a factor.

Consider Mob Programming, which is somewhat of the extreme of this position and one that I’m *not* 100% sold on. Mob programming “is very similar to Pair Programming, but the whole team works together on the same "problem" at the same time at the same computer.” That’s business owner, developers, testers…everyone, in one room, working on the same problem. Below is a video of a company that has adopted this as how they work every…single…day.

 

If you talked to them, I’m sure they’d rave about how awesome this has worked for them. And really, when we talk about how to best organize a team, this speaks to the real truth: the best approach as to where team members should work – remote, co-located, or on top of each other all day – is best left to the individuals that make up the team to decide.

For me, I will always prefer co-locating with a team I’m a part of over being remote. I find communication is best done face to face, with video chat being the bare minimum. Which brings us to…

Face to Face Is Necessary

I think we all agree email isn’t a collaboration tool and on this point I will argue with anyone. So let’s deal with the face-to-face portion of the tweet.

No, I’m not going to quote the whole “90% of communication is non-verbal” because that opens up a whole new can of worms about how that theory has been debunked and how you can’t really set a number to something like this because of how people are different, etc. etc. But there is a lot of research into this area that does support that non-verbal cues DO influence how we communicate and interpret communication. If you’d like to learn more on this, there are a number of great TED talks on the subject.

I’ve also found that face to face communication removes any interpretation or guesswork that an individual has to do around inferring tone and emotion. Consider this.

“Hi Joe, please come by my office – I need to discuss something with you.”

We’ve probably all gotten an email like this or similar in the past. What thoughts go through your mind when you see this? Without any context this can be read as everything from getting a promotion to getting fired.

Now imagine you’re casually walking down the hallway and your boss passes by and says to you “Hi Joe. Hey, can you please come by my office later? I need to discuss something with you.” Your boss is relaxed, he even smiles when he sees you. He doesn’t seem agitated or concerned. Now what goes through your mind? Probably that you just need to go talk about something with the boss and the thought that you may be in trouble doesn’t even cross your mind.

Seeing someone, hearing the tone in their voice, experiencing their non-verbal cues – all of these play into how effectively we communicate with others, whether we’re receiving or delivering the message.

I have a love/hate relationship with Slack. I’ve had a couple of teams that used it extensively, and for some reason I always seem to end up with people who – like me, admittedly – love to bug and poke each other. But sometimes, without the non-verbal piece of communication, those messages can be misinterpreted as being mean-spirited or even cruel. I actually took a self-imposed hiatus from Slack for a bit because it was becoming counter-productive to the blissful communication utopia the platform promises.

I mentioned that effective communication matters both when receiving and when delivering a message. Face to face communication allows us to pick up cues about the receiver and what state they’re in. Are they happy, sad, frustrated? Are they in a place where a comment would hurt them? Without non-verbal cues I have no opportunity to alter my message to ensure it’s communicated effectively.

Where face to face or video isn’t an option, voice is great as well. I’m also a proponent of using emoticons in emails to convey sentiment – it’s better than forcing someone to infer what your tone is.

Let’s Wrap Up

Everyone is different – personalities, drivers, needs, wants, etc. Everyone communicates in their own way and has their own preferences. And no solution fits everyone. Mob Programming isn’t for everyone, just like an all-remote team isn’t for everyone either. For me, I will always look to co-locate with my team and do face-to-face communication as much as possible, because that’s where I’ve seen the most success personally. If you’re being successful in what you’re doing – great, keep it up! If not, look at yourself, your team, and how you’re working/communicating and see if there’s an opportunity to change things.


JSE IO for Scala Devs

Originally posted on: http://geekswithblogs.net/JoshReuben/archive/2015/11/20/jse-io-for-scala-devs.aspx

Scala IO is somewhat lacking at this point in time, often requiring a fallback to the Java APIs.
I did not have time to write this out in Scala, so this post contains a few Java snippets. What counts are the concepts: Buffers, Streams, Channels, and IO vs. NIO vs. NIO2. Also, Scala does not natively support a "try with resources" construct - for this, use scala-arm (https://github.com/jsuereth/scala-arm/blob/master/README.md) and its for-comprehension over resource.managed().

Anyhow - enjoy:

I/O Basics

print( ) / println( )

java.io - byte / char stream-based I/O - produces + serializes objects / deserializes objects + consumes information over a physical I/O device: network connection / memory buffer / disk file

java.nio - buffer / channel-based I/O - a complementary abstraction - open a channel (connection) to an I/O device with a buffer to hold data, then perform I/O data operations on the buffer. NIO1 was channel-based I/O only; NIO2 also supports stream-based I/O.

Using the NIO System - The most common I/O device is the disk file - all file channel operations are byte-based. E.g. open a file via a Path from Paths.get() and populate a ByteBuffer.

IO interfaces: Closeable, DataInput, DataOutput, Externalizable, FileFilter, FilenameFilter, Flushable, ObjectInput, ObjectInputValidation, ObjectOutput, ObjectStreamConstants, Serializable

IOException – subclasses include FileNotFoundException. SecurityException (not an IOException) is thrown when a security manager denies access, e.g. for applets.

The IO/NIO Packages

  • java.io - byte / char stream types

  • java.nio – buffer types

  • java.nio.channels – channels: open I/O connections

  • java.nio.channels.spi – service providers

  • java.nio.charset – encoders, decoders for byte <--> char

  • java.nio.charset.spi - service providers

  • java.nio.file - files

  • java.nio.file.attribute

  • java.nio.file.spi

Buffers

Buffer Class - encapsulates the current position (index of the next R/W op), limit (index of 1 past the last), and capacity.

methods:

  • array(), arrayOffset(), capacity(), clear(), flip(), hasArray(), hasRemaining(), isDirect(), isReadOnly(), limit(), mark(), position(), remaining(), reset(), rewind()

  • various get() / put() methods

  • allocate() - allocate a buffer manually

  • wrap() - wrap an array inside a buffer

  • slice() - create a subsequence

derived classes:

  • for different types: ByteBuffer, CharBuffer, DoubleBuffer, FloatBuffer, IntBuffer, LongBuffer, ShortBuffer

  • MappedByteBuffer extends ByteBuffer - used to map a file to a buffer.
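
To make the position / limit / capacity mechanics concrete, here is a minimal sketch (plain JDK, the values are arbitrary) of allocating a ByteBuffer, writing into it, flipping it, and reading it back:

import java.nio.ByteBuffer;

public class BufferBasics {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);      // capacity = 8, position = 0, limit = 8
        buf.put((byte) 10).put((byte) 20).put((byte) 30);
        buf.flip();                                   // limit = old position, position = 0 -> ready to read
        while (buf.hasRemaining()) {
            System.out.println(buf.get());            // prints 10, 20, 30
        }
        buf.rewind();                                 // position back to 0, limit unchanged -> re-readable
    }
}

flip() is the idiomatic "switch from writing to reading" call; rewind() only resets the position, which is why the channel examples later call it after a read().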

Streams

2 types of streams: low-level byte (binary) and high-level char (Unicode)

2 ways to close a stream - 1) explicitly call close() in finally, 2) try-with-resources (AutoCloseable)


Predefined Standard I/O Streams

System.in / out

java.lang.System contains 3 predefined stream variables: in (an InputStream), out and err (both PrintStream).

These fields are declared public, static, and final. They may be redirected or wrapped within character-based streams.

Reading Console Input - wrap System.in

BufferedReader br = new BufferedReader(new InputStreamReader(System.in));

After this statement executes, br is a character-based stream that is linked to the console through System.in.

use readLine() to read strings:

    String str;
    do { str = br.readLine(); } while (str != null && !str.isEmpty()); // readLine() strips the newline, so stop on an empty line (or EOF)

Writing Console Output - System.out.write() can output chars – use PrintWriter.print() / println() for string output - System.out references a PrintStream

    PrintWriter pw = new PrintWriter(System.out, true);
    pw.println("This is a string");
The Console Class
  • System.console() - a singleton convenience class over System.in / System.out - read from and write to the console. Implements Flushable. Input methods: readLine(), readPassword().
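
A minimal sketch of Console usage; note that System.console() returns null when no interactive console is attached (for example inside many IDEs), so the null check matters:

import java.io.Console;

public class ConsoleDemo {
    public static void main(String[] args) {
        Console con = System.console();
        if (con == null) {
            System.out.println("No interactive console available");
            return;
        }
        String name = con.readLine("Name: ");
        char[] pw = con.readPassword("Password: ");   // not echoed to the screen
        con.printf("Hello %s (password length %d)%n", name, pw.length);
    }
}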

The Stream Classes

  • 4 abstract classes in separate hierarchies.

  • InputStream / OutputStream for byte streams - cannot work directly with Unicode characters.

  • Reader / Writer for character streams.

  • All implement AutoCloseable and Closeable. OutputStream / Writer additionally implement Flushable. Writer implements Appendable.

  • Stream Benefits - clean abstraction, composition of the filtered stream classes


The Byte Stream Classes

stream class names: <category><InputStream/OutputStream>

Byte Stream Categories
  • Buffered - chunks

  • ByteArray - vanilla

  • Data – support for standard datatypes

  • File

  • Filter

  • Object

  • Piped

  • Print – for console out – output only

  • Pushback – supports one byte unget – input only

  • Sequence – combine 2 streams sequentially – input only

Input / Output Abstract Base Classes
  • InputStream – methods: available(), close(), mark(), markSupported(), read(), reset(), skip()

  • OutputStream – methods: close(), flush(), write()

derived classes override read() / write()


ByteArray Streams
  • ByteArrayInputStream - ctor takes a byte[] as the stream source - not necessary to call close(). If mark() is not called, then reset() sets the stream pointer to the start of the stream.

    String tmp = "abcdefghijklmnopqrstuvwxyz";
    byte b[] = tmp.getBytes();
    ByteArrayInputStream input1 = new ByteArrayInputStream(b);
  • ByteArrayOutputStream - uses a byte array as the destination - the writeTo() convenience method writes the accumulated contents out to another stream (the FileOutputStream below writes them to copy.txt).

ByteArrayOutputStream s = new ByteArrayOutputStream();
byte[] buf = "Hello Ruz".getBytes();
s.write(buf, 0, buf.length);                 // write(byte[], int, int) does not throw IOException

try (FileOutputStream f = new FileOutputStream("copy.txt")) {
    s.writeTo(f);                            // copy the accumulated bytes to the file
} catch (IOException e) {
    System.out.println("An I/O Error Occurred");
}

Filter Streams
  • FilterInputStream / FilterOutputStream - wrappers (via ctor) around underlying input or output streams that transparently provide some extended functionality

Buffered Streams
  • BufferedInputStream / BufferedOutputStream - the size of the buffer is passed in the ctor; ballpark: 8192 bytes. Buffering supports moving backward within the available buffer: use mark() to remember a location and reset() to return to it. Use flush() to force buffered output to the stream, and read() to read bytes from the file.

new BufferedInputStream(Files.newInputStream(Paths.get("x.txt")))
Pushback Streams
  • PushbackInputStream - extends FilterInputStream with a memory buffer for multibyte operations (improving performance; supports skipping, marking, and resetting of the stream). Pushback lets a byte be read and then returned to the stream --> a “peek”. unread() pushes back the low-order byte as the next byte returned by a subsequent call to read(). Side effect: it invalidates mark() - use markSupported() to check the stream.
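
A minimal “peek” sketch with PushbackInputStream (the input string is arbitrary):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.PushbackInputStream;

public class PushbackDemo {
    public static void main(String[] args) throws IOException {
        try (PushbackInputStream in =
                 new PushbackInputStream(new ByteArrayInputStream("42abc".getBytes()))) {
            int first = in.read();                  // peek at the first byte...
            in.unread(first);                       // ...then push it back onto the stream
            System.out.println((char) in.read());   // '4' again - the pushed-back byte is re-read
        }
    }
}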

Sequenced Streams
  • SequenceInputStream - use to concatenate multiple InputStreams. When the end of each file is reached, its associated stream is closed.

SequenceInputStream(Enumeration<? extends InputStream> streamEnum)

Print Streams
  • PrintStream

  • provides output capabilities for the file handle System.out (a PrintStream)

  • implements Appendable, AutoCloseable, Closeable, Flushable

  • print() / println() - leverages Object.toString() overrides.

  • printf() - uses the Formatter class

  • format()

Data Streams
  • DataOutputStream / DataInputStream - write / read primitive data to / from a stream. Implement the DataOutput / DataInput interfaces, which define methods that convert primitive values to or from a sequence of bytes --> easy serialization: convert a value of a primitive type into a byte sequence and write it to the underlying stream, e.g. void writeDouble(double). (See the round-trip sketch below.)
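
A minimal round-trip sketch (the file name and values are arbitrary); note that values must be read back in exactly the order they were written:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class DataStreamDemo {
    public static void main(String[] args) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream("data.bin"))) {
            out.writeDouble(3.14);
            out.writeInt(42);
            out.writeBoolean(true);
        }
        try (DataInputStream in = new DataInputStream(new FileInputStream("data.bin"))) {
            System.out.println(in.readDouble());   // 3.14
            System.out.println(in.readInt());      // 42
            System.out.println(in.readBoolean());  // true
        }
    }
}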

Random Access Files
  • RandomAccessFile - not derived from InputStream or OutputStream. Implements DataInput, DataOutput, which define the basic I/O methods.

  • seek() - set the current position of the file pointer within the file

  • setLength() - lengthen or shorten a file.
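
A minimal random-access sketch (file name and values are arbitrary): write a few doubles, seek to the third one, then truncate the file.

import java.io.IOException;
import java.io.RandomAccessFile;

public class RandomAccessDemo {
    public static void main(String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("random.dat", "rw")) {
            for (int i = 0; i < 5; i++) raf.writeDouble(i * 1.5);  // 0.0, 1.5, 3.0, 4.5, 6.0
            raf.seek(2 * 8);                                       // each double is 8 bytes -> 3rd value
            System.out.println(raf.readDouble());                  // 3.0
            raf.setLength(3 * 8);                                  // keep only the first 3 values
        }
    }
}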

The Character Stream Classes

stream class names: <category><Reader/Writer>


Categories
  • Buffered

  • CharArray – vanilla

  • File

  • Filter

  • InputStream / OutputStream – translators

  • LineNumber – counts lines. Reader only

  • Piped

  • Print – for console out – Writer only

  • Pushback

  • String

Reader / Writer abstract base classes:
  • Reader - abstract base. Methods: close(), mark(), markSupported(), read(), ready(), reset(), skip()

  • Writer - abstract base. Also implements Appendable. Methods: append(), close(), flush(), write()

derived classes override read() / write()

File Streams
  • FileReader

try (FileReader fr = new FileReader("FileReaderDemo.java")) {
    int i;
    while ((i = fr.read()) != -1) System.out.print((char) i);
}
  • FileWriter - creates the file before opening it for output when you construct the object. A typical demo builds a sample buffer of characters by first making a String and then using getChars() to extract the character array equivalent (see the sketch below).
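
A minimal FileWriter sketch (file name and sample text are arbitrary):

import java.io.FileWriter;
import java.io.IOException;

public class FileWriterDemo {
    public static void main(String[] args) throws IOException {
        String source = "Now is the time for all good men to come to the aid of their country\n";
        char[] buffer = new char[source.length()];
        source.getChars(0, source.length(), buffer, 0);     // extract the char[] equivalent
        try (FileWriter fw = new FileWriter("file1.txt")) {  // creates the file if it does not exist
            fw.write(buffer);
        }
    }
}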

CharArray Streams
  • CharArrayReader - uses a char[] as the source.

CharArrayReader(char array[], int start, int numChars)

  • CharArrayWriter - uses a char[] as the destination.

Buffered Streams
  • BufferedReader - improves performance by buffering input. Specify the buffer size in the ctor.

  • BufferedWriter - buffers output.

Pushback Streams
  • PushbackReader - allows one or more characters to be returned to the input stream. unread() - returns characters to the invoking input stream.

Print Streams
  • PrintWriter - a character-oriented version of PrintStream. Supports printf()-style formatting.

Flushable Interface

  • force buffered output to be written to the stream to which the object is attached. flush() causes buffered output to be physically written to the underlying device.

Try With Resources

  • java.lang.AutoCloseable interface - support for try-with-resources (note: a resource declared in the try is implicitly final; can manage multiple resources separated by semicolons.)

  • java.io.Closeable interface extends AutoCloseable. Automates closing a resource - close() closes the invoking object, releasing resources. Implemented by the stream classes. Automatically closing a file via try-with-resources:

    int i;
    try (FileInputStream fin = new FileInputStream(args[0])) {
        do {
            i = fin.read();
            if (i != -1) System.out.print((char) i);
        } while (i != -1);
    } catch (FileNotFoundException e) {
        System.out.println("File Not Found.");
    } catch (IOException e) {
        System.out.println("An I/O Error Occurred");
    }

Serialization

  • Write object state to a byte stream --> for persistent storage or RMI. Considerations: object relationships should be DAGs.

  • Serializable interface – the implementing class (& all of its subclasses) are serializable. Static fields and fields declared as transient opt out of serialization.

  • Externalizable interface - extensible serialization. Methods: readExternal(ObjectInput inStream), writeExternal(ObjectOutput outStream)

  • ObjectOutput interface - extends the DataOutput / AutoCloseable interfaces and supports object serialization. Methods: close(), flush(), write(), writeObject()

  • ObjectOutputStream class - extends OutputStream, implements ObjectOutput. For writing objects to a stream.

  • ObjectInput interface - extends DataInput, AutoCloseable

  • ObjectInputStream - extends InputStream, implements ObjectInput
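
A minimal serialization round-trip sketch (the class, file name, and values are arbitrary); note that the transient field does not survive the round trip:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    static class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        int x, y;
        transient String cachedLabel;   // transient fields are skipped during serialization
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("point.ser"))) {
            out.writeObject(new Point(3, 4));
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("point.ser"))) {
            Point p = (Point) in.readObject();
            System.out.println(p.x + "," + p.y);   // 3,4 ; cachedLabel comes back as null
        }
    }
}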

Channel Interface

represents an open connection to an I/O source / destination. Extends Closeable, AutoCloseable.

obtain a channel by calling getChannel() on an object that supports channels:

  • FileInputStream / FileOutputStream

  • RandomAccessFile

  • Socket / ServerSocket / DatagramSocket

  • Files (via the static newByteChannel(), which returns a SeekableByteChannel)

derived channel types: FileChannel, SocketChannel, SeekableByteChannel

support various read() and write() methods

support additional channel access & control methods

FileChannel Class

get / set the current position, transfer information between file channels, get the size, lock the channel. Provides a static open() method, which opens a file and returns a channel to it, and a map() method, which lets you map a file to a buffer.

Charsets and Selectors

  • A charset - defines the way that bytes are mapped to characters.

  • An encoder / decoder - encodes a char sequence into bytes / decodes a byte sequence into chars. Defined in the java.nio.charset package.

  • A selector - supports key-based, non-blocking, multiplexed I/O - enables you to perform I/O through multiple channels. Defined in the java.nio.channels package. Most applicable to socket-backed channels.
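
A minimal charset sketch - encoding a string to bytes and decoding it back (the text is arbitrary):

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        Charset utf8 = StandardCharsets.UTF_8;
        ByteBuffer bytes = utf8.encode("héllo");        // char sequence -> bytes
        CharBuffer chars = utf8.decode(bytes);          // bytes -> chars
        System.out.println(chars);                      // héllo
        System.out.println(Charset.defaultCharset());   // the platform default charset
    }
}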

NIO Manual Channel File I/O

Manually Read from a File via a Channel
  • Paths.get() - specify & open a Path to a file (defaults to read-only).

  • establish a channel to the file: Files.newByteChannel() - returns a SeekableByteChannel interface object, cast to the FileChannel class (implements AutoCloseable).

  • allocate a buffer used by the channel: wrap an existing array, or ByteBuffer.allocate() to allocate dynamically.

  • SeekableByteChannel.read(ByteBuffer buf) - fills the buffer with data from the file. Sequential reads – call repeatedly. Returns # bytes read, or -1 at EOF.

  • load the buffer with data from the file - ByteBuffer.rewind() - reset to start --> read bytes with ByteBuffer.get(). Bytes are cast to char so the file can be displayed as text. (Optionally create a char buffer that encodes bytes as they come in.)

  • streamlined as a single try-with-resources block that calls Paths.get() and newByteChannel():

try (SeekableByteChannel fChan = Files.newByteChannel(Paths.get("x.txt"))) {
    ByteBuffer mbuf = ByteBuffer.allocate(128);
    int count;
    do {
        mbuf.clear();                            // prepare the buffer for the next read
        count = fChan.read(mbuf);                // read from channel into buffer
        if (count != -1) {
            mbuf.rewind();                       // so it can be read
            for (int i = 0; i < count; i++) {
                System.out.print((char) mbuf.get());
            }
        }
    } while (count != -1);
}

Manually Write to a File via a Channel
  • specify StandardOpenOption.WRITE / CREATE.

  • write data to the buffer using ByteBuffer.put() - advances the current position.

  • reset to the start of the buffer via rewind() before calling write().

  • alternatively call flip() instead of rewind() - sets the current position to 0 and the limit to the previous current position.

for (int h = 0; h < 3; h++) {
    // Write some bytes to the buffer.
    for (int i = 0; i < 26; i++)
        mBuf.put((byte) ('A' + i));
    mBuf.rewind();      // Rewind the buffer so that it can be written.
    fChan.write(mBuf);  // Write the buffer to the output file.
}

NIO Automatic File I/O via Mapped Buffer

Read from a file mapped buffer
  • Cast the retval of Files.newByteChannel() to FileChannel.

  • map the channel to a buffer: FileChannel.map() - returns a MappedByteBuffer, which extends ByteBuffer. Params:

    • MapMode enum - values: READ_ONLY / READ_WRITE / PRIVATE (make a private copy of the file - changes to the buffer do not affect the underlying file).

    • pos - location within file to begin mapping

    • size - number of bytes to map

  • read file from that buffer.

try (FileChannel fChan = (FileChannel) Files.newByteChannel(Paths.get("x.txt"))) {
    long fsize = fChan.size();
    MappedByteBuffer mbuf = fChan.map(FileChannel.MapMode.READ_ONLY, 0, fsize);
    for (int i = 0; i < fsize; i++) System.out.print((char) mbuf.get());
}

Write to a file mapped buffer
  • data written to buffer will automatically be written to the file. No explicit write operation is necessary.

  • specify MapMode.READ_WRITE (see the sketch below).
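
A minimal sketch of writing through a mapped buffer (the file name is arbitrary); the channel must be opened for both READ and WRITE to allow a READ_WRITE mapping:

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedWriteDemo {
    public static void main(String[] args) throws IOException {
        try (FileChannel fChan = FileChannel.open(Paths.get("mapped.txt"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer mbuf = fChan.map(FileChannel.MapMode.READ_WRITE, 0, 26);
            for (int i = 0; i < 26; i++) mbuf.put((byte) ('A' + i));  // changes flow to the file automatically
        }
    }
}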


File IO

File Byte Streams

  • FileInputStream - ctor takes a path string or a File; opened for reading. Can read a single byte, an array of bytes, or a subrange of an array of bytes. Use available() to determine the number of bytes remaining and skip() to skip over unwanted bytes.

  • FileOutputStream - write bytes to a file - will create a non-existent file before opening it

byte[] buf = s.getBytes();
try (FileOutputStream f = new FileOutputStream("blah.txt")) {
    f.write(buf, 1, buf.length - 2);
}

The ctor takes the file name.

Members:

  • close() - When done with a file, must close it

  • read() - reads a single byte, returns –1 on EOF

  • write() - To write to a file

open a file Path by calling Files.newInputStream() / newOutputStream()


Files

File class
  • Does not operate on streams; deals directly with file properties and the file system – get permissions, datetime, get/set path

  • NIO Path interface and Files class – a better alternative

    File f1 = new File("/blah/blah");
  • query methods: getName(), getPath(), getAbsolutePath(), getParent(), exists(), canWrite(), canRead(), isDirectory(), isFile(), isAbsolute(), lastModified(), length()
  • 2 useful utility methods: renameTo(), delete()

  • other methods: deleteOnExit(), getFreeSpace(), getTotalSpace(), getUsableSpace(), isHidden(), setLastModified(), setReadOnly()

  • implements Comparable --> compareTo()

  • toPath() - conversion to java.nio.file.Path

  • list() - for a File that is a directory, lists child file names. Optional param: FilenameFilter interface - limits the number of files returned – accept() is called once for each file in the list.

  • listFiles() - return the file list as an array of File objects instead of strings.

  • mkdir() , mkdirs()

The Files Class
  • provides static methods to act upon a Path - open or create a file that has the specified path.

  • Methods: copy(), createDirectory(), createFile(), delete(), exists(), isDirectory(), isExecutable(), isHidden(), isReadable(), isRegularFile(), isWritable(), move(), newByteChannel(), newDirectoryStream(), newInputStream(), newOutputStream(), notExists(), readAttributes(), size()

  • some methods take an argument of the OpenOption interface type - describes how to open a file. It is implemented by the StandardOpenOption enum, with values: APPEND, CREATE, CREATE_NEW, DELETE_ON_CLOSE, DSYNC, READ, SPARSE, SYNC, TRUNCATE_EXISTING, WRITE

Files.copy(Paths.get("x.txt"), Paths.get("y.txt"), StandardCopyOption.REPLACE_EXISTING);


Paths

Path Interface

describes a file’s location. In the java.nio.file package; extends the interfaces Watchable, Iterable<Path>, Comparable<Path>. Convert a File instance into a Path instance by calling toPath(). Methods:

  • getName(index) - obtain an element in a path.

  • getNameCount() - get the number of elements in a path

  • toString() - a string representation of the entire path

  • resolve() - turn a relative path into an absolute path

  • endsWith(), getFileName(), getName(), getNameCount(), getParent(), getRoot(), isAbsolute(), resolve(), startsWith(), toAbsolutePath(), toString()


The Paths Class

get() - obtain a Path

static Path get(String pathname, String … parts)

static Path get(URI uri)


The File Attribute Interfaces

Associated with a file is a set of attributes - hierarchy of interfaces defined in java.nio.file.attribute.


  • BasicFileAttributes – methods: creationTime(), fileKey(), isDirectory(), isOther(), isRegularFile(), isSymbolicLink(), lastAccessTime(), lastModifiedTime(), size()

  • From BasicFileAttributes two interfaces are derived: DosFileAttributes and PosixFileAttributes.

  • DosFileAttributes– methods: isArchive(), isHidden(), isReadOnly(), isSystem()

  • PosixFileAttributes (POSIX stands for Portable Operating System Interface.) - methods: group(), owner(), permissions()

to access a file’s attributes - call the Files.readAttributes() static method or getFileAttributeView(): interfaces AttributeView, BasicFileAttributeView, DosFileAttributeView, and PosixFileAttributeView.


FileSystems

The FileSystem, FileSystems, and FileStore Classes

By using the newFileSystem() method defined by FileSystems, it is even possible to obtain a new file system. The FileStore class encapsulates the file storage system.

Watchable Interface

- an object that can be monitored for changes.


Using NIO for Stream-Based File I/O

NIO.2 - symbolic links, directory tree traversal, file metadata.

  • Query a Path – Path methods: getName(), getParent(), toAbsolutePath().

  • Query a File - Files methods: isExecutable(), isHidden(), isReadable(), isWritable(), exists(), readAttributes() - e.g. BasicFileAttributes, PosixFileAttributes.

  • List the Contents of a Directory - DirectoryStream<Path> implements Iterable<Path> - create using newDirectoryStream(Path), then use its iterator() method

  • List a Directory Tree - use Files.walkFileTree(Path root, FileVisitor<? super Path> fv)

FileVisitorinterface

  • defines how the directory tree is traversed – methods: postVisitDirectory, preVisitDirectory, visitFile, visitFileFailed

  • each method returns a FileVisitResult enum value: CONTINUE, SKIP_SIBLINGS, SKIP_SUBTREE, TERMINATE

  • to continue traversing the directory and subdirectories, a method should return CONTINUE. For preVisitDirectory(), return SKIP_SIBLINGS to bypass the directory and its siblings and prevent postVisitDirectory() from being called. To bypass just the directory and its subdirectories, return SKIP_SUBTREE. To stop the directory traversal, return TERMINATE.

  • It is possible to watch a directory for changes by using java.nio.file.WatchService.
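
A minimal sketch of walking a directory tree with Files.walkFileTree() and a SimpleFileVisitor (which supplies default implementations of the four FileVisitor methods); the starting directory is arbitrary:

import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class TreeWalkDemo {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(".");   // starting directory
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                System.out.println(file + "  (" + attrs.size() + " bytes)");
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult visitFileFailed(Path file, IOException exc) {
                return FileVisitResult.CONTINUE;   // ignore files we cannot read and keep walking
            }
        });
    }
}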


FakeItEasy and EntityFramework

Originally posted on: http://geekswithblogs.net/Aligned/archive/2015/11/20/fakeiteasy-and-entityframework.aspx

We needed to fake or mock out Entity Framework so that we could test our “service layer” that holds our business logic without hitting a real database. We are using EF as our Repository and skipping all the extra work in creating a repository code layer that only wraps EF. We are ok with being this closely tied to EF.

It was difficult to figure out how to fake the context through an interface we made ourselves. We found some helpful NuGet packages, so I decided to share them.

EntityFramework.Testing:

EntityFramework Testing

EntityFramework.Testing.FakeItEasy:

EntityFrameworkTesting.FakeItEasy NuGet package

Looking at the project site for EntityFramework.Testing, they have some sample code for FakeItEasy as well as other mocking frameworks.  Do you guys think these packages would replace the extra classes that we added to my test class?  Would they work in scenarios like what was done for Premier?

EntityFramework.Testing Project site

EntityFramework.Testing.FakeItEasy provides a helpful extension method to mock EntityFramework's DbSets using FakeItEasy.

This also supports Moq and other libraries.

Sample code using it:

using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Linq;
using Acme.Data;
using Acme.Models;
using FakeItEasy;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Acme.Services.Tests
{
    [TestClass]
    public class ProductServiceTests
    {
        [TestMethod]
        [TestCategory("Product Service Get")]
        public void Get_All_ReturnsExpected()
        {
            // Create test data
            var testData = new List<Product>
            {
                new Product { ProductId = 1, Name = "Product 1", Description = "this is a description", Active = true },
                new Product { ProductId = 2, Name = "Product 2", Description = "this is a description", Active = true },
                new Product { ProductId = 3, Name = "Product 3", Description = "this is a description", Active = false },
                new Product { ProductId = 4, Name = "Product 4", Description = "this is a description", Active = true },
            };

            // Arrange
            var set = A.Fake<DbSet<Product>>(o => o
                        .Implements(typeof(IQueryable<Product>))
                        .Implements(typeof(IDbAsyncEnumerable<Product>)))
                        .SetupData(testData);

            var context = A.Fake<AcmeContext>();
            A.CallTo(() => context.Products).Returns(set);

            var productService = new ProductService(context);

            // Act
            var products = productService.GetAll().ToList();

            // Assert
            Assert.AreEqual(4, products.Count(), "Should have 4");
            Assert.AreEqual(1, products.First().ProductId, "Should be 1");
        }

        [TestMethod]
        [TestCategory("Product Service Get")]
        public void Get_Active_OnlyReturnsActive()
        {
            // Create test data
            var testData = new List<Product>
            {
                new Product { ProductId = 1, Name = "Product 1", Description = "this is a description", Active = true },
                new Product { ProductId = 2, Name = "Product 2", Description = "this is a description", Active = true },
                new Product { ProductId = 3, Name = "Product 3", Description = "this is a description", Active = false },
                new Product { ProductId = 4, Name = "Product 4", Description = "this is a description", Active = true },
            };

            // Arrange
            var set = A.Fake<DbSet<Product>>(o => o
                        .Implements(typeof(IQueryable<Product>))
                        .Implements(typeof(IDbAsyncEnumerable<Product>)))
                        .SetupData(testData);

            var context = A.Fake<AcmeContext>();
            A.CallTo(() => context.Products).Returns(set);

            var productService = new ProductService(context);

            // Act
            var products = productService.GetActiveProducts().ToList();

            // Assert
            Assert.AreEqual(3, products.Count(), "Should have 3");
            Assert.AreEqual(4, products.Last().ProductId, "Should be 4");
            Assert.IsFalse(products.Any(x => x.Active == false), "All returned products should be active");
        }
    }
}

Here’s an alternative a co-worker had created before the NuGet package was discovered.

The test:

[TestMethod]
[TestCategory("Product Service Get")]
public void Get_All_ReturnsExpected()
{
    // Arrange
    var contextFaker = new ContextFaker();
    contextFaker.Products.AddRange(new List<Product>
    {
        new Product { ProductId = 1, Name = "Product 1", Description = "this is a description", Active = true },
        new Product { ProductId = 2, Name = "Product 2", Description = "this is a description", Active = true },
        new Product { ProductId = 3, Name = "Product 3", Description = "this is a description", Active = false },
        new Product { ProductId = 4, Name = "Product 4", Description = "this is a description", Active = true },
    });

    var productService = new ProductService(contextFaker.FakeContext);

    // Act
    var products = productService.GetAll().ToList();

    // Assert
    Assert.AreEqual(4, products.Count(), "Should have 4");
    Assert.AreEqual(1, products.First().ProductId, "Should be 1");
}
The helper code: 
public class ContextFaker
{
    public List<Product> Products = new List<Product>();

    private IAcmeContext _fakeContext;

    public IAcmeContext FakeContext
    {
        get
        {
            A.CallTo(() => _fakeContext.Products).Returns(ListFaker<Product>.GetFake(Products));
            return _fakeContext;
        }
        set { _fakeContext = value; }
    }

    public ContextFaker()
    {
        _fakeContext = A.Fake<IAcmeContext>();
    }
}

public static class ListFaker<T> where T : class
{
    public static DbSet<T> GetFake(List<T> data)
    {
        var dataAsQueryable = data.AsQueryable();
        var fakeDbSet = A.Fake<DbSet<T>>(b => b.Implements(typeof(IQueryable<T>)));

        A.CallTo(() => ((IQueryable<T>)fakeDbSet).GetEnumerator()).Returns(dataAsQueryable.GetEnumerator());
        A.CallTo(() => ((IQueryable<T>)fakeDbSet).Provider).Returns(dataAsQueryable.Provider);
        A.CallTo(() => ((IQueryable<T>)fakeDbSet).Expression).Returns(dataAsQueryable.Expression);
        A.CallTo(() => ((IQueryable<T>)fakeDbSet).ElementType).Returns(dataAsQueryable.ElementType);

        return fakeDbSet;
    }
}

Coexistence between Exchange forests (without trusts…) -- Part 10: Configuring Free/Busy

Originally posted on: http://geekswithblogs.net/marcde/archive/2015/11/23/coexistence-between-exchange-forests-without-trustshellip----part-10-configuring.aspx

Note: In order for Free/Busy to work, Outlook Anywhere needs to be enabled in both forests and Autodiscover needs to be functioning properly. Additionally, the external URLs for EWS need to be configured.

Note: A service account in each forest is required for authentication purposes. This account should not have a mailbox and should have the minimum rights possible. In the example configuration it has been configured as “\svc_fb”.

Step 1: Open the Exchange management shell

Note: Step 2 relates to the target forest. This is the forest you are pulling the information into.

Step 2: Run “Set-AvailabilityConfig -OrgWideAccount ‘\svc_fb’”


Step 3: Run “$a = Get-Credential” (enter the credentials for the organization-wide user in the domain you want to get Free/Busy from)

Step 4: Run “Add-AvailabilityAddressspace -Forestname Contoso.com -Accessmethod OrgWideFB -Credential:$a”

 

Coexistence between Exchange forests (without trusts…)  -- Part 9: Synchronization!
Coexistence between Exchange forests (without trusts…)  -- Part 11: References

(JS) Regular Expression to replace "require" statements with "import statements"

Originally posted on: http://geekswithblogs.net/AngelEyes/archive/2015/11/23/js-regular-expression-to-replace-require-statements-with-import-statements.aspx

Having dabbled a bit with React / Redux, and while doing so used both the Import statement from ES6 and the Require function of Browserify, I decided I needed a RegEx to replace all "require" lines to "import" lines.
This is what I came up with, so far:

const\s+(\w+)\s+=\s+require\('([^']+)'\);?

and replace with:

import $1 from '$2';