
Why I dropped my Kindle in the dustbin & moved back to physical books for reading.


Originally posted on: http://geekswithblogs.net/anirugu/archive/2015/10/24/why-i-drop-my-kindle-in-dustbin-amp-move-back.aspx

The first piece of news to come in recently was that Waterstones would stop selling the Kindle in most stores. Then this one came up:

Drop That Kindle! 10 Reasons Print Books Are Better Than E-Books

Later on, many people on the Internet shared their opinions about the Kindle and physical books.

So let's talk about the Kindle. Amazon made the Kindle for reading, reading without causing eyestrain.

 

1. Worst PDF support :-

A few months ago I contacted my university and they sent me a bunch of PDF files. I seriously wanted to read them, but the Kindle didn't help at all. It rendered "print" as "prtr", many characters were missing from single words, and sentences couldn't be read without brainstorming. I finally ended up using Google Play Books, which let me read the PDFs.

 

2. No colors :-

I recently tried comics on Amazon through Kindle Unlimited. The problem is that colors can't be seen on the grayscale e-ink Kindle. I read comics, but the Kindle has no good software support for them: if I zoom and move to the next page, I have to set the zoom again for that page. I used an Android app that saves my settings and keeps the same position when I move to another page. The colors can be seen on an Android device, but not on the Kindle.

 

3. Kindle Unlimited is actually Kindle Limited :-

When I open the Amazon India site and go to Kindle Unlimited, all I see is nudity, pornography and material that is totally useless or strictly adult. No matter what kind of books you read, they will just show you porn. So what man wants his family to use his Kindle and see this crap every time they visit the site?

 

They don't even have a single book from the top 200 popular books selected by Goodreads (Goodreads is also part of Amazon). Kindle Unlimited is actually paying for things that nobody cares about.

 

Recently I read a book and tried to remove it after reading, but it couldn't be done. When I checked what had happened to the book, it had been removed from Kindle Unlimited and was only shown with a buy option at full price. So there is no author who really wants to share his hard work through Kindle Unlimited. Maybe Amazon doesn't pay them well, so they don't like it.

 

4. Bugs & missing features.

 

If you write a mail to Amazon, Jeff (the CEO) or any other department, they will respond with "Your ideas are valuable to us, we can definitely understand the kind of problem you are facing."

 

Here is some of Amazon's craziness (KU stands for Kindle Unlimited).

1. You can't return KU books from the context menu (similar to a right click) on the Kindle device.

2. There is no option on read.amazon.com to return books either.

3. There is no option to show how much I have read in the Manage Kindle content and devices section.

4. You can't buy books from the Kindle app on Android. You need to go to the browser and search. I don't get the reason behind this joke.

5. If you search and buy something in the Kindle store, the next time you search there is no option to exclude already purchased and borrowed books.

6. The device's screensaver can't be changed. I jailbroke my Kindle 4 and it worked better; Duokan works better than the Kindle's own firmware. They don't update things or build features, they just keep shipping the same crappy software. They release a font and all the media write "Amazon blah blah". These paid agents in the media write it up as if Amazon had released a whole OS.

OK, it's just an e-reader, but what exactly is the use of Goodreads? My Kindle works faster than the Kindle Paperwhite. All I can blame is Goodreads, which I never use on the Kindle. The same goes for the browser, another useless piece. They added the browser just to claim a feature, "See, the Kindle has a browser", yet it can't even load a single HTML page.

7.

Writing a mail to customer support is like talking to a tape recorder. When I told support that I wanted to read my PDFs, they told me to convert them. Neither Send to Kindle nor Calibre works: I tried to read the converted books and all the diagrams were damaged, so you can't easily understand what a diagram is about. Kindle support is not helpful when you run into a problem. They are just tape recorders and get paid to behave like tape recorders.

The Kindle has no flexibility of any kind:

Storage is limited. You can't extend it or change it.

Fonts are limited; you can't just add your own TTF file (like Duokan lets you do on the Kindle).

When you pay for books, you don't hold any rights (I write about this clearly in point 8).

The device's screensaver and other things can't be modified the way they can on an Android phone. On my Android phone I just use Flipboard, every kind of article reads very easily, and Flipboard's interface is so good. The Kindle, on the other hand, doesn't let me install any Android app, nor does it have its own ecosystem. It doesn't even support EPUB and other formats.

 

8. When you pay for books on the Kindle you don't have any rights. Recently I read on news.ycombinator.com about a man whose Kindle account was closed as suspicious after it had been hacked. He lost a four-digit dollar amount in books. He said the same thing: it's a total loss. If you bought a book and it suddenly gets deleted, you can't recover it. Amazon support will tell you "We don't have it."

So most conversations with support end up like this:

1. "Does Amazon have any option to hide the books I have already read when I search?" "We don't have it."

2. "Do you have anything in Hindi?" "You can look at this page." The page is itself a joke: it contains no good books, or just a few. OK, that one isn't Amazon's fault; I can't blame them for it.

9. If you break your Kindle you can't get it fixed once it is out of warranty. There is no service center that will fix the device for you at any price.

If you break a Chinese phone or any other device, 99.99% of the time you can get it fixed in any shop in a popular market. But when a Kindle breaks, Amazon makes you feel bad. A broken device eats at a person's mind; he feels bad when he sees his money lost in a broken device.

10. Forget the backlight for a second. If you're really into a book, you can read it even in ordinary light. And the other way around: if you read on a Kindle at 3 AM, your eyes still feel the strain.

Recently my Kindle broke, and I feel this was a good thing to happen to me. If something bad happens to you early, it's good, because you have time to learn from it. So I finally put my Kindle in the dustbin and moved on to physical books.

 

Let’s compare.

Backlight :- Who needs a backlight? Are you too lazy to even get up for a glass of water? You can arrange proper light in your room when you read, and you still have many options for reading without disturbing others.

Pixels :- The pixels in the Paperwhite are a useless kind of feature. You can't read a comic in color. Even when I tried read.amazon.com to see color content, it was total crap. I tried some magazines from the USA and they had some quality, but that's not the case for our domestic ones.

Sharing :- Either the Kindle doesn't have the feature, or if it has it, it doesn't work in every country, and if it works in every country it's useless. You have to depend on Amazon every time for the things you paid for.

Now I can share my books with everyone without any trouble.

Kindle Unlimited costs me 199 INR and gives me total crap to read. There are many libraries in this country that give you lifetime membership: if you pay them 1000, you can borrow any book priced under 1000, and the membership only expires when you die. So I'm no longer limiting myself to the holy crap that Amazon offers in Kindle Unlimited.

In the same way, in the last 12 months I never bought a paperback on amazon.in. I simply go to a store, browse to check things out, and buy books there. You could tell me I'd be better off buying the same books from Amazon. But wait: when I see a dozen people working in the shop, I think my money is well spent. At least it feeds some people who sit in a book shop all day just helping others choose their books. I am not sure how much Amazon pays the sellers and other people in its retail chain.

So my Kindle being broken doesn't mean I'm ready to go off to a new war; I'd simply rather use physical books and read them without any digital mumbo-jumbo. My books won't break the way a Kindle does, and buying a book from a store still gives me a good feeling. When I go to a store I pay full price, but I see how people choose books, and there are things to see, learn and listen to. Amazon eats all of that by putting a piece-of-crap device in the customer's hands.

Thanks for reading my post. When you pay Amazon for an eBook, a person who uses a pirated copy has more rights than you: you have DRM and you can't put the book anywhere else without an Amazon login, while the person with the pirated copy can share his books with anyone.

 

When you pay for software, the person who uses a pirated copy has the same thing as you. But with eBooks you are locked into DRM, while those who pirate are free to share everything with anyone at any time.

 

I could write 100 other things in this post, but I need to stop now. I am happy to have my paperback books; I can feel the text, see the colors, and it feels better than senseless eBooks.

 

Thank you again for reading my post about the horrible Kindle. Happy reading!


10 Features in Team Foundation Server that you maybe didn’t know about


Originally posted on: http://geekswithblogs.net/jakob/archive/2015/10/25/10-features-in-team-foundation-server-that-you-maybe-didnrsquot.aspx

I often talk to different teams about how they work with Team Foundation Server or Visual Studio Online. I get a lot of questions about different features, and some of them tend to come up more often than others. So here is a list of 10 features in TFS that I get questions about regularly or that I have noticed a lot of teams don't know about. It is by no means exhaustive, and it is a mixture of smaller and larger features, but hopefully you will find something here that you didn't know about before.

 

1. Associate Work Items in Git commits from any client

Associating work items with your changesets in TFS has always been one of the more powerful features. Not in itself, but in the traceability that it gives us.
Without this, we would have to rely on check-in comments from the developers to understand the reason for a particular change, which is not always easy when you look at changes that were made a few years back!

When you are using Git repos in TFS together with the Git integration in Visual Studio, you have the same functionality that lets you associate a work item with a commit, either by using a work item query or by specifying a work item ID.

image

But a lot of developers like to use other Git tooling, such as Git Extensions, SourceTree or the command line. And then we have the teams that work on other platforms, perhaps developing iOS apps but storing their source code in a Git repo in TFS. They obviously can't use Visual Studio for committing their changes to TFS.

To associate a commit with a work item in these scenarios, you can simply enter the ID of the work item with a # in front of it as part of the commit message:

git commit -a -m "Optimized query processing #4321"

Here we associate this commit with the work item with ID 4321. Note that since Git commits are local, the association won’t actually be done  until the commit has been pushed to the server. TFS processes git commits for work item association asynchronously, so it could potentially take a short moment before the association is done.

 

2. Batch update work items from the Web Access

Excel has always been a great tool for updating multiple work items in TFS. But over the years the web access in TFS has become so much better that most people now do all their work item management there. Unfortunately, it is not possible to export work item queries to Excel as easily from the web as it is from Visual Studio.

But we can update multiple work items in the TFS web access as well. These features have gradually been added to Visual Studio Online and are now part of TFS 2015 as well.

From the product backlog view, we can select multiple work items and then perform different operations on them, as shown below:

image

 
There are a few shortcut operations for very common tasks such as moving multiple backlog items to an iteration, or assigning them to a team member.
But then you have Edit selected work item(s), which allows us to set any field to a new value in one operation.

image

You can also do this from the sprint backlog and from the result list of a work item query.

 

3. Edit and commit source files in your web browser

Working with source code is of course best done from a development IDE such as Visual Studio. But sometimes it can be very convenient to change a file straight from the web access, and you can! Maybe you find a configuration error while logged on to a staging environment where you do not have access to Visual Studio. Then you can just browse to the file in source control and change it using the Edit button:

image

 

After making the changes, you can add a meaningful commit message and commit the change back to TFS. When using Git, you can even commit it to a new branch.

image

 

4. Bring in your stakeholders to the team using the free Stakeholder license

Normally, every team member that accesses TFS and reads or writes information to it (source code, work items etc.) needs to have a Client Access License (CAL). For the development team this is usually not a problem; often they already have a Visual Studio with MSDN subscription, in which a TFS CAL is included. What often causes some friction is when the team tries to involve the stakeholders. Stakeholders often include people who just want to track progress or file a bug or a suggestion occasionally. Buying a CAL for every person in this role usually ends up being way too expensive and not really worth it.

In TFS 2013 Update 4 this was changed: from that point, people with a Stakeholder license do not need a CAL at all, they can access TFS for free. But they still have a limited experience; they can't do everything that a normal team member can. Features that a stakeholder can use include:

  • Full read/write/create on all work items
  • Create, run and save (to “My Queries”) work item queries
  • View project and team home pages
  • Access to the backlog, including add and update (but no ability to reprioritize the work)
  • Ability to receive work item alerts

 

image

 

To learn more about the Stakeholder license, see  https://msdn.microsoft.com/Library/vs/alm/work/connect/work-as-a-stakeholder

 

5. Protect your Git branches using branch policies

When using Team Foundation Version Control (TFVC), we have had the ability since the first version of TFS to use check-in policies for enforcing standards and policies on everything that is checked in to source control. We also have the ability to use Gated Builds, which allow us to make sure that a changeset is not checked in unless an associated build definition executes successfully.

When Git was added to TFS back in 2013, there was no corresponding functionality available. But now, in TFS 2015, the team has added branch policies as a way to protect our branches from inadvertent or low-quality commits. In the version control tab of the settings administration page you can select a branch from a Git repo and then apply branch policies. The image below shows the available branch policies.

image

 

Here we have enabled all three policies, which will enforce the following:

  • All commits in the master branch must be made using a pull request
  • The commits in the pull request must have associated work items
  • The pull request must be reviewed by at least two separate reviewers 
  • The QBox.CI build must complete successfully before the pull request can be merged to the master branch

 

I really recommend that you start using these branch policies. They are an excellent way to enforce the standard and quality of the commits being made, and they can help your team improve its process and move towards delivering value to your customers more frequently.

 

6. Using the @CurrentIteration in Work Item Queries

Work Item Queries are very useful for retrieving the status of your ongoing projects. The backlogs and boards in TFS are great for managing sprints and requirements, but the ability to query information across one or more projects is pivotal. Work item queries are often used as reports, and we can also create charts from them.

Very often we are interested in information about the current sprint, for example how many open bugs there are, how many requirements don't have associated test cases, and so on. Before TFS 2015, we had to write work item queries that referenced the current sprint directly, like so:

image

The problem with this was of course that as soon as the sprint ended and the next one started, we had to update all these queries to reference the new iteration. Some people came up with smart workarounds, but it was still cumbersome.

 

Enter the @CurrentIteration token. This token evaluates to the current sprint, which means we can define our queries once and they will continue to work for all upcoming sprints as well.

 

image

This token is unfortunately not yet available in Excel, since Excel is not team-aware. Because iterations are configured per team, the token must be evaluated in the context of a team. Both the web access and Visual Studio have this context, but the Excel integration does not, yet.

Learn more about querying using this token at https://msdn.microsoft.com/en-us/Library/vs/alm/Work/track/query-by-date-or-current-iteration
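
If you want to use the same kind of query outside of the web access, for example from a script, work item queries can also be executed through the WIQL REST API. Below is a rough PowerShell sketch; the account name, project name, credentials and field list are placeholders, the api-version may differ between TFS and Visual Studio Online, and @CurrentIteration is team-scoped, so some clients may need an explicit iteration path instead:

$account = "youraccount"      # hypothetical Visual Studio Online account name
$project = "YourProject"      # hypothetical team project name
$cred    = Get-Credential     # alternate authentication credentials for the account

# The query text is ordinary WIQL; @CurrentIteration resolves in a team context.
$body = @{ query = "SELECT [System.Id], [System.Title] FROM WorkItems WHERE [System.TeamProject] = '$project' AND [System.WorkItemType] = 'Bug' AND [System.IterationPath] = @CurrentIteration" } | ConvertTo-Json

Invoke-RestMethod -Uri "https://$account.visualstudio.com/DefaultCollection/$project/_apis/wit/wiql?api-version=1.0" -Method Post -Body $body -ContentType "application/json" -Credential $cred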

 

7. Pin Important Information to the home page

The new homepage has been available since TFS 2012, and I still find that most teams do not make enough use of the possibility to pin important information to it.
The homepage is perfect to show on a big screen in your team room, at least if you show relevant information on it.

We can pin the following items to the home page:

  • Work Item Queries
    The tile shows the number of work items returned by the query. Focus on pinning queries where these numbers are important and can trigger some activity, e.g. not the total number of backlog items, but the number of active bugs.
  • Build Definition
    This tile shows a bar graph with the history of the last 30 builds. It gives a good visualization of how stable the builds are; if you see that builds fail every now and then, you have a problem that needs to be investigated.
  • Source control
    Shows the number of recent commits or changesets. It will let you know how much activity is going on in the different repos.
  • Charts
    Charts can be created from work item queries and can also be pinned to the home page. Very useful for quickly giving an overview of the status of a particular area.

 

Here is an example where we have added a few items of each type

image

 

8. Query on Work Item Tags

Support for tagging was first implemented back in TFS 2012 Update 2. This allowed us to add multiple tags to work items and then filter backlogs and query results on these tags.
There was however a big thing missing and that was the ability to search for work items using tags.

This has been fixed since TFS 2013 Update 2, which means we can now create queries like this.

image
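
For reference, in WIQL (the language behind work item queries) a tag filter uses the Contains operator on the System.Tags field. A tiny sketch, where the tag name is just an example:

# Example WIQL only - 'Production' is a made-up tag name.
$wiql = "SELECT [System.Id], [System.Title] FROM WorkItems WHERE [System.TeamProject] = @project AND [System.Tags] CONTAINS 'Production'"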

It is also possible to work with tags using Excel; this was another big thing missing from the start.

Unfortunately it is not yet possible to set up alerts on tags. You can vote for this feature on UserVoice here: http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/6059328-use-tags-in-alerts

 

9. Select how you want to handle bugs

One of the most common questions I get when talking to teams that use TFS is how they should handle bugs. Some teams want to have the bugs on the backlog and treat them like requirements. Other teams want to treat them more like tasks, that is, adding them to the corresponding user story or backlog item and using the task board to keep track of the bug.

The good thing is that you can now configure this per team. On the team settings page, there is a section that lets you configure the behavior of bugs.

image

 

To learn more about controlling the behavior of bugs, see https://msdn.microsoft.com/Library/vs/alm/work/customize/show-bugs-on-backlog

 

10. Integrate with external or internal services using Service Hooks

Extensibility and integration are very important to Microsoft these days, and this is very clear when looking at the investments for TFS 2015 that included a bunch of work in this area. First of all Microsoft has added a proper REST API for accessing and updating most of the available artifacts in TFS, such as work items and builds. It uses OAuth 2.0 for authentication, which means it is based on open modern web standards and can be used from any client on any platform.

In addition to this, TFS 2015 also supports Service Hooks. A service hook is basically a web endpoint that can be called when something happens, in this case in TFS. So for example, when a backlog item is created in TFS we might want to also create a card in Trello. Or when a new change is committed into source control, we might want to kick off a Jenkins build.

Here is a list of the services that are supported out of the box in TFS 2015:

image

And the list keeps growing; in Visual Studio Online there are already 7 more services offered, including AppVeyor, Bamboo and MyGet.

Note that the list contains one entry called Web Hooks. This is a general service configuration in which you can configure an HTTP POST endpoint that will receive messages for the events that you configure. The messages can be sent as JSON, Markdown, HTML or text. This means that you can also integrate with internal services, as long as they expose HTTP endpoints.
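
Because a Web Hook is just another subscription, it can also be created programmatically through the Service Hooks REST API mentioned above. Here is a hedged PowerShell sketch; the account name, project GUID and target URL are placeholders, and the exact publisher/event identifiers should be checked against the documentation for your version:

# Hypothetical values - replace the account, project GUID, target URL and credentials with your own.
$account   = "youraccount"
$projectId = "00000000-0000-0000-0000-000000000000"
$cred      = Get-Credential

# Subscription: POST a JSON message to an internal endpoint whenever a work item is created.
$subscription = @{
    publisherId      = "tfs"
    eventType        = "workitem.created"
    consumerId       = "webHooks"
    consumerActionId = "httpRequest"
    publisherInputs  = @{ projectId = $projectId }
    consumerInputs   = @{ url = "https://internal.example.com/tfs-events" }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "https://$account.visualstudio.com/DefaultCollection/_apis/hooks/subscriptions?api-version=1.0" -Method Post -Body $subscription -ContentType "application/json" -Credential $cred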

 

To learn more about service hooks, see https://www.visualstudio.com/en-us/integrate/get-started/service-hooks/create-subscription

Friend of Friend recommendations with Neo4j and SQL Server


Originally posted on: http://geekswithblogs.net/brendonpage/archive/2015/10/26/friend-of-friend-recommendations-with-neo4j.aspx

I'm going to be doing a Neo4j workshop up in JHB in November 2015 and thought I'd give an example of something that is easy to do in a graph database but challenging to do in a relational database. Before we begin: Neo4j is a graph database, a graph database is a database that uses a graph model to store data, and graph databases fall under the broader category of NoSQL databases.

The problem

I have a social network and want to recommend possible friends to my users.

In Neo4j I’ll be storing the data using the following structure:

image

In SQL server I’ll be using:

image

The social network

I’ve setup the following social network in both Neo4j and SQL server:

image

I've arranged it so that it is easy to see who should be recommended for Brendon. I would want my query to recommend Louise for Brendon because she is a friend of 2 of Brendon's friends, whereas Alice is only a friend of one of Brendon's friends. I would also expect the query to exclude Bob because Brendon is already friends with Bob:

clip_image002

First Attempt

So I started off by writing the Cypher to get friend recommendations for Brendon (Cypher is the query language used by Neo4j):

MATCH 
    (me:Person)-[:FRIEND]->(myFriend:Person)-[:FRIEND]->(friendOfFriend:Person)
WHERE NOT
    (me)-[:FRIEND]->(friendOfFriend:Person)
    AND me.name = 'Brendon'
RETURN
    count(friendOfFriend) as friendsInCommon, friendOfFriend.name as suggestedFriend
ORDER BY
    friendsInCommon DESC;

Which returns:

image

Then I wrote SQL that would do the same:

SELECT
    Me.Id                      AS MeId,
    FriendOfFriend.FriendId    AS SuggestedFriendId,
    COUNT(*)                   AS FriendsInCommon
FROM
    People         AS Me
INNER JOIN
    FriendMaps    AS MyFriends
      ON MyFriends.MeId = Me.Id
INNER JOIN
    FriendMaps    AS FriendOfFriend
      ON MyFriends.FriendId = FriendOfFriend.MeId
LEFT JOIN
    FriendMaps    AS FriendsWithMe
      ON  Me.Id = FriendsWithMe.MeId
      AND FriendOfFriend.FriendId = FriendsWithMe.FriendId
WHERE
    FriendsWithMe.MeId IS NULL
    AND Me.Name = 'Brendon'
GROUP BY
    Me.Id,
    FriendOfFriend.FriendId
ORDER BY
    FriendsInCommon DESC

Which returns:

image

The first thing you'll notice is that for my SQL results I've only returned Ids, no names. This is because to return names I would either have to add another join (back to the People table) or do a separate query, both of which are additional overhead. In Cypher I have access to both the Id and the Name, and chose to return only the name. This isn't too big a deal, but it is the first hint that Neo4j is more suited to this problem.

The second thing you might notice is that the Cypher is shorter, and if you are familiar with both languages the Cypher is certainly easier to read. Again this is a small hint towards Neo4j being more suited.

Road Block

My queries work great for Brendon, but if I try to use them to get friend recommendations for Louise I get no results! Why is this? Well, if we re-arrange the social network so that it is easy to see who should be recommended to Louise, you will notice that the directions of the friend relationships are not uniformly pointing away from our subject towards their friends and their friends' friends. We now have relationships pointing in both directions:

image

Ignoring Relationship Direction

To solve this let’s ignore the direction of the FRIEND relationship. Here is the updated Cypher query which recommends friends for Louise and ignores the direction of the FRIEND relationships:

MATCH 
    (me:Person)-[:FRIEND]-(myFriend:Person)-[:FRIEND]-(friendOfFriend:Person)
WHERE NOT
    (me)-[:FRIEND]-(friendOfFriend:Person)
    AND me.name = 'Louise'
RETURN
    count(friendOfFriend) as friendsInCommon, friendOfFriend.name as suggestedFriend
ORDER BY
    friendsInCommon DESC;

Which returns:

image

Yay, it works! You will notice that all I had to do was remove the arrows from the relationship definitions in the query. So wherever I had "-[:FRIEND]->" I now have "-[:FRIEND]-".

I started updating the SQL query to do the same thing but gave up after 30 minutes of unsuccessfully trying to figure it out. Granted I’m not a SQL guru, but I have been using it for most of my career and have solved a lot of interesting problems with it.

Some might argue that my data is incomplete, that I should've added friend relationships in both directions, which would make the original queries work. But that isn't the point. The point is that it is difficult to ignore relationship directions in SQL, and putting data in for the sake of a query is going to cause other issues for us. For example, what if the direction of the relationships had meaning? As in, it indicated who added whom as a friend, and FRIEND relationships in both directions indicated that the other person had accepted the friend request. If I'd blindly added relationships in both directions so that my original recommendation queries worked, then I wouldn't be able to do any of that.

Conclusion

Graph databases are good at doing what I like to call ad-hoc relationship queries and Cypher makes it easier to express, read and reason about relationships. Relational databases are more rigid in their relationship querying capabilities because relationships aren’t first class citizens and have to be modelled using table structures.

One thing that I have not touched on but feel is worth a mention is performance. Neo4j is going to have a linear increase in query execution time as the social network grows in size and complexity, whereas the SQL Server queries will be impacted severely as the social network grows in complexity: the more each person is connected, the bigger the results of those joins are going to be!

Coexistence between Exchange forests (without trusts…) -- Part 3: Preparing the UK Exchange 2007 environment


Originally posted on: http://geekswithblogs.net/marcde/archive/2015/10/19/coexistence-between-exchange-forests-without-trustshellip-----part-3.aspx

Create a scoped send connector

Step 1: Open the exchange management console

Step 2: Click on Hub Transport under Organization Configuration

Step 3: Click on the Send Connectors Tab

Step 4: In the Actions pane click on New Send Connector

Step 5: Enter a name for the send connector. From the drop down menu under Select the intended use for this Send connector select Internal.

Step 6: In the address space pane, click on Add.

Step 7: In the SMTP Address Space window, under Address, enter the domain name for the organization mail will be routed to. Enter a cost for the send connector if applicable.

Step 8: In the network setting page, select Route mail through the following smart hosts and click Add

Step 9: In the Add smart host window, enter the IP address of the HUB transport server in the organization you will be routing the mail to and click ok.

? Note: It is recommended to add multiple HUB servers in this field (if available) for redundancy.

Step 10: Click Next.

image

Step 11: Leave the authentication settings on their defaults. Click Next.

image

Step 12: In the Source server page, verify the defaults and click Next.

? Note: It is recommended to add multiple source servers here (if available) for redundancy.

image

Step 13: In the New Connector window click New.

image

Step 14: In the Completion window review the results and click Finish.
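
The same scoped send connector can also be created from the Exchange Management Shell. The following is only a sketch with placeholder connector name, address space, smart host IPs and server names; adjust everything to your own environment:

# Minimal sketch - placeholder values; route the other organization's namespace via its HUB servers.
New-SendConnector -Name "UK to US (scoped)" -Usage Internal -AddressSpaces "contoso.com" -DNSRoutingEnabled $false -SmartHosts "10.0.1.10","10.0.1.11" -SourceTransportServers "UKHUB01","UKHUB02"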

 

Create a scoped receive connector

Step 1: Open the exchange management console

Step 2: Click on Hub Transport under Server Configuration

Step 3: In the Actions pane click on New Receive Connector

Step 4: In the Introduction page, under Name, enter a name for the receive connector. From the drop down box under Select the intended use for this receive connector, select Internal. Click next.

image

Step 5: On the Remote Network Settings page, remove the default remote IP addresses and enter the IP address of the server(s) in the remote domain. Click Next.

image

Step 6: On the New Connector page, click New.

Step 7: On the Completion page, click Finish.

Step 8: Right click the newly created connector and select properties.

Step 9: Click the Permission Groups tab.
image

Step 10: Tick the box next to Anonymous Users on the permission groups tab and click Apply and OK.

image

? Note: Repeat this step for every server that will be receiving SMTP traffic from the other organization.
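
If you prefer the shell, a receive connector with the same settings could be created roughly as follows (placeholder connector name, server name and remote IP addresses; add -Bindings if your environment needs a specific IP/port binding):

# Minimal sketch - placeholder values.
New-ReceiveConnector -Name "From US Exchange" -Usage Internal -Server "UKHUB01" -RemoteIPRanges "10.0.1.10","10.0.1.11"

# Grant anonymous access, as in step 10 above. Note that this replaces the existing permission group list.
Set-ReceiveConnector -Identity "UKHUB01\From US Exchange" -PermissionGroups "ExchangeServers","AnonymousUsers"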

Add accepted domains to exchange

Uk.toasterlabs.org

Step 1: Open the exchange management console

Step 2: Click on Accepted Domains under Organization Configuration

Step 3: In the Actions pane click on New Accepted Domain

Step 4: On the New accepted domain page, enter a name and the accepted domain (for example: uk.contoso.com). Tick the radio button for Authoritative Domain.

Step 5: Click New.

 

toasterlabs.org

Step 1: Open the exchange management console

Step 2: Click on Accepted Domains under Organization Configuration

Step 3: In the Actions pane click on New Accepted Domain

Step 4: On the New accepted domain page, enter a name and the accepted domain (for example: contoso.com). Tick the radio button for Internal Relay.

Step 5: Click New.
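
For reference, the same two accepted domains can be added from the shell with New-AcceptedDomain; the names and domains below are only examples:

# The UK namespace is authoritative in this forest.
New-AcceptedDomain -Name "UK namespace" -DomainName "uk.contoso.com" -DomainType Authoritative

# The shared root namespace is an internal relay, so unresolved recipients are relayed to the other organization.
New-AcceptedDomain -Name "Shared namespace" -DomainName "contoso.com" -DomainType InternalRelay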

 

1.1.1.4 Update address book policy

Step 1: Open the exchange management console

Step 2: Click on E-Mail Address Policies under Organization Configuration

Step 3: Right click the Default Policy and select Edit…

Step 4: On the Introduction page, click Next.

Step 5: On the Conditions page, click Next.

Step 6: On the E-mail addresses page, click Add.

Step 7: In the SMTP E-mail address window, under E-mail address local part, select the appropriate radio button. Select the radio button for Select accepted domain for email address and click Browse.

Step 8: In the Select Accepted Domain window, select the appropriate domain and click OK.

Step 9: In the SMTP E-mail address window, click OK.

Step 10: On the E-mail addresses page, click Next.

Step 11: On the Schedule page, select the radio button for Immediately and click Next.

Step 12: On the Edit E-mail Address Policy page, click edit.

Step 13: Once the application of the policy has been completed, click Finish.

? Note: Repeat this process for each added accepted domain.
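
The policy update can also be scripted with Set-EmailAddressPolicy and Update-EmailAddressPolicy. The templates below are illustrative only; keep whichever address should stay primary in your environment (uppercase SMTP marks the primary address, lowercase smtp a secondary one):

# Sketch - adjust the templates and policy name to your own environment.
Set-EmailAddressPolicy -Identity "Default Policy" -EnabledEmailAddressTemplates "SMTP:%m@uk.contoso.com","smtp:%m@contoso.com"

# Apply the updated policy to the affected recipients immediately.
Update-EmailAddressPolicy -Identity "Default Policy"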

Update CAS URLs to reflect UK domain

Step 1: Open the exchange management shell

Step 2: Run the following code block to change the external URLs:

? Note: Change the FQDNs to match your environment!

? Note: This code will change the settings on all the CAS servers it finds in the Exchange organization! If necessary, adapt the filters to match a site.

Foreach($Server in Get-ClientAccessServer | Get-ExchangeServer | where-object {$_.AdminDisplayVersion.Major -lt 14}){

$CASserver = $Server.Identity

Get-AutodiscoverVirtualDirectory -Server $CASserver | Set-AutodiscoverVirtualDirectory -ExternalUrl "https://autodiscover.uk.contoso.com/Autodiscover/Autodiscover.xml"

Get-WebServicesVirtualDirectory -Server $CASserver | Set-WebServicesVirtualDirectory -ExternalUrl "https://mail.uk.contoso.com/Ews/Exchange.asmx"

Get-OabVirtualDirectory -Server $CASserver | Set-OabVirtualDirectory -ExternalUrl "https://mail.uk.contoso.com/Oab"

Get-OwaVirtualDirectory -Identity "$CASserver\OWA (Default Web site)" | Set-OwaVirtualDirectory -ExternalUrl "https://mail.uk.contoso.com/Owa"

Get-ActiveSyncVirtualDirectory -Server $CASserver | Set-ActiveSyncVirtualDirectory -ExternalUrl "https://mail.uk.contoso.com/Microsoft-Server-ActiveSync"

}

 

Request & install new certificate for the exchange 2007 environment to reflect these changes

Not documented

 

 

 

Coexistence between Exchange forests (without trusts…)  -- Part 2: DNS Forwarders

Coexistence between Exchange forests (without trusts…)  -- Part 4: Preparing the US Exchange 2010 environment

Ethics Organization


Originally posted on: http://geekswithblogs.net/TimothyK/archive/2015/10/26/ethics-organization.aspx

So I was listening to a recent Western Developer's podcast on Ethics.  At one point in the conversation [39:30] there was a call for establishing an industry-standard professional body for software developers.  This would be much like the Bar Association for lawyers or the Medical Board for doctors.  It would be responsible for establishing a set of guidelines or rules for software development professionals to follow.  It would grant membership to those properly trained and punish those that act unprofessionally.  This is also a common theme from Uncle Bob [23:00].

Neither of these podcasts stated what that code of ethics should be or how we would go about forming this body.  It would certainly be difficult to do.  In my opinion this will never happen.  I hold this opinion not because I think it is too difficult, but because I think we shouldn’t.  I do believe we need to act professionally and we need to apply ethics to our jobs.  However, I don’t think we could successfully build a governance body and have this widely accepted by our clients.

We are a service industry.  We generally don't serve individuals, like doctors serve patients.  We serve industries.  We write software for the health care, legal, engineering, or accounting industries.  We usually specialize in serving a single industry.  In my opinion software companies that specialize in a single industry are more successful at meeting their clients' needs.

When specializing in servicing an industry you must become part of that industry.  You must learn to talk like your clients.  You must adopt their culture and behaviours.

 

For example let’s look at serving the health care industry.  Doctors are bound by their medical board to save lives.  That is their code of ethics.  Although hospital administrators are not doctors and not ruled by the same medical board, due to their service to the industry they do follow many of the same core ethics.  Hospital administrators work within fixed budgets to save as many lives as possible.  Their actions save lives, the same as doctors save lives.

A hospital administrator could print flyers reminding people to wash their hands and this can save lives.  The costs of this printing project could save more lives per dollar than keeping a surgical staff on call.  So when an administrator needs to make a decision to print flyers or fund a surgical team they can apply their ethics to help them make this decision.

Now for a software example, let's say a hospital administrator wants you to build a big-design-up-front, 3-year waterfall project.  You know because of your training and/or experience as a software development professional that this approach is likely to fail.  It is a waste of money.  A professional software developer would argue with the administrator that it is better to break the project down into smaller Agile projects.

When arguing with the hospital administrator as to whether or not the waterfall project should be done, this argument must be framed in the context of the health care industry.  The software developer should refuse to do the waterfall project.  Not because their professional software organization states that waterfall is wrong, but because the waterfall project will not save as many lives as spending that same hospital budget on other potential projects would.

 

We cannot superimpose ethics of the software development profession (or any other industry) on our client’s industry.  If we cannot restate our ethics in the terminology of our client’s industry we will not be successful.

In the accounting industry there is a very strict audit trail.  Every penny received from the sale of an item can be traced through to how that is redistributed to employee salaries, building maintenance costs, raw materials, vendors, taxes, and shareholders.  An accountant would consider it unprofessional if they were not able to produce this audit trail.

However, other industries do not have as strict an audit trail.  The agriculture and food industry generally cannot produce an audit trail to trace a loaf of bread back to the farmer's field where the wheat was grown.  An accountant may look at that and claim it is unprofessional.  The farmer and baker would reject their arguments.  The accountant would not be able to argue the point purely on the professionalism of an audit trail.  However, they might be more successful if they could reframe their argument as a food safety issue.  Food safety is part of the ethics the farmer and baker are bound to.  The agriculture industry does batch together food from multiple farms and verify the quality and safety of the batch, but they might never be able to trace grains of wheat as accurately as accountants track pennies.

Furthermore, to the agricultural professional one should not waste food or other resources.  Tracking individual grains of wheat would certainly be seen by farmers and bakers as wasteful and therefore unprofessional.  The same qualities that may make someone a professional in one industry could seem unprofessional if unconditionally applied in other industries.

 

Although we can and should train software developers to be professional and ethical, we cannot directly superimpose those standards on the industries we serve.  Professional software developers must hold two degrees: one in software development and another in the domain and ethics of the industry they serve.  If we cannot adapt our ethics to the terms of our clients (on a client-by-client basis) we risk being branded as unprofessional.  This could, or perhaps already has, led to the software profession as a whole being labeled as immature.

Coexistence between Exchange forests (without trusts…) -- Part 4: Preparing the US Exchange 2010 environment


Originally posted on: http://geekswithblogs.net/marcde/archive/2015/10/28/coexistence-between-exchange-forests-without-trustshellip----part-4-preparing.aspx

Create receive connector

Step 1: Open the exchange management console

Step 2: Click on Hub Transport under Server Configuration

Step 3: In the Actions pane click on New Receive Connector

Step 4: In the Introduction page, under Name, enter a name for the receive connector. From the drop down box under Select the intended use for this receive connector, select Internal. Click next.
image

Step 5: On the Remote Network Settings page, remove the default remote IP addresses and enter the IP address of the server(s) in the remote domain. Click Next.
image

Step 6: On the New Connector page, click New.

Step 7: On the Completion page, click Finish.

Step 8: Right click the newly created connector and select properties.

Step 9: Click the Permission Groups tab.
image

Step 10: Tick the box next to Anonymous Users on the permission groups tab and click Apply and OK.
image

? Note: Repeat this step for every server that will be receiving SMTP traffic from the other organization.

Create a send connector

Step 1: Open the exchange management console

Step 2: Click on Hub Transport under Organization Configuration

Step 3: Click on the Send Connectors Tab

Step 4: In the Actions pane click on New Send Connector

Step 5: Enter a name for the send connector. From the drop down menu under Select the intended use for this Send connector select Internal.

Step 6: In the address space pane, click on Add.

Step 7: In the SMTP Address Space window, under Address, enter the domain name for the organization mail will be routed to. Enter a cost for the send connector if applicable.

Step 8: In the network setting page, select Route mail through the following smart hosts and click Add

Step 9: In the Add smart host window, enter the IP address of the HUB transport server in the organization you will be routing the mail to and click ok.

? Note: It is recommended to add multiple HUB servers in this field (if available) for redundancy.

Step 10: Click Next.

Step 11: Leave the authentication settings on their defaults. Click Next.

Step 12: In the Source server page, verify the defaults and click Next.

? Note: It is recommended to add multiple source servers here (if available) for redundancy.

Step 13: In the New Connector window click New.

Step 14: In the Completion window review the results and click Finish.

 

Change accepted domain to internal relay

Step 1: Open the exchange management console.

Step 2: Click on Hub Transport under Organization Configuration.

Step 3: Click on the Accepted Domains tab.

Step 4: Right click the domain that needs to be changed and select properties.
image

Step 5: On the Properties tab, select the radio button for Internal Relay Domain.
image

Step 6: On the Properties tab, click OK.
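
The same change can be made in one line from the Exchange Management Shell (the domain identity below is a placeholder):

# Sketch - change the existing accepted domain to an internal relay domain.
Set-AcceptedDomain -Identity "contoso.com" -DomainType InternalRelay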

 

Add internal relay domain for uk.sutherland.com

Step 1: Open the exchange management console

Step 2: Click on Accepted Domains under Organization Configuration

Step 3: In the Actions pane click on New Accepted Domain

Step 4: On the New accepted domain page, enter a name and the accepted domain (for example: contoso.com). Tick the radio button for Internal Relay.
image

Step 5: Click New.

 

External domain preparations

1.1.2.1 Create MX record for uk.toasterlabs.com

Not documented

 

Create external DNS records for uk.toasterlabs.com (autodiscover & mail)

Not documented

 

Coexistence between Exchange forests (without trusts…)  -- Part 3: Preparing the UK Exchange 2007 environment

Coexistence between Exchange forests (without trusts…)  -- Part 5: Preparing the GALSync Server

Office 365: Authentication


Originally posted on: http://geekswithblogs.net/marcde/archive/2015/10/30/office-365-authentication.aspx

When we're talking about authentication, the first thing that pops up in our minds is Active Directory. For years, Active Directory has been the staple identity provider for most companies and the foundational building block upon which most applications were built.

With Azure and O365, we need to think about the different authentication methods that could be used. Are we going with an "all in the cloud" model? Federated identities? Hybrid Active Directory? Or maybe something else completely?

All in the Microsoft cloud

With the first possible solution we're looking at an identity solution where nothing exists on premises. This is the most unlikely solution for larger businesses, but if you're a small or new business, it might be an option. You're still in charge and you can still do pretty much the same things when it comes to authentication, but you're hosting all your user information in the cloud with no servers on-prem.

clip_image002

1 - From the identity and authentication in office 2013 and O365 document

clip_image004

clip_image006

2 - From the Identity and authentication in office 2013 and O365 document

clip_image008

Living in a hybrid world

Most of the world does not live in an all-Microsoft, cloud-only world. Companies have existing infrastructure, applications that need time to be ported, and many more things that restrict them from doing away with existing on-prem servers (and thank God for that, or we would be out of a job!), so living in a hybrid world is necessary. But what does that look like?

Same-sign on

Behind door number one is the same sign-on option. Minimal on-prem infrastructure (a DirSync server is required) allows companies to leverage the "same sign-on" method. Historically the DirSync server would not sync end users' passwords, but we have that possibility now. OK, before anyone freaks out, let me clarify: Password Sync does not synchronize the password. It has no way of doing that. What it actually does is take the hash of the user's password that exists in Active Directory, hash that again, and sync it up to Azure Active Directory. That way, users can be authenticated using the same password they have in their on-prem Active Directory. But they will have to authenticate again. It is not single sign-on!

And yes, that’s a hash of a password hash. Hashception… (I really had to do that!)

Single sign-on

Single sign-on is what Microsoft preaches, but does require some extra infrastructure. For a start, you will need at least one ADFS server in order to authenticate against. Ideally you would have multiple so there is redundancy and you don’t lose your authentication infrastructure due to patching.

Authenticating against an ADFS server also means that users don’t need to re-enter their credentials when they are already logged in. Hence the name “Single Sign-On”.
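
As a point of reference (this is not part of the walkthrough above), a domain is typically converted to federated authentication with the MSOnline PowerShell module once the ADFS farm is in place. A rough sketch, with placeholder server and domain names:

# Sketch only - assumes the MSOnline module is installed and ADFS is already deployed.
Import-Module MSOnline
Connect-MsolService                                  # sign in with a tenant administrator account

Set-MsolADFSContext -Computer "adfs01.contoso.com"   # point the module at the internal ADFS server
Convert-MsolDomainToFederated -DomainName "contoso.com"

# Verify: the Authentication column should now show Federated for the domain.
Get-MsolDomain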

Authentication flow with ADFS for an intranet user

clip_image010

1. An intranet user tries to access an application on Office 365, but hasn’t been authenticated before

2. The application redirects the user to Azure AD for authentication.

3. The user enters the username for the application and, because Azure AD knows ADFS has been set up, redirects that user to the ADFS server for authentication.

4. Since this is a Single sign-on scenario, and the user is working on a desktop which is domain-joined, ADFS issues a user token.

5. User token gets sent to the Intranet User.

6. The User token gets sent to Azure Active Directory, is validated and generates a new token for the application the user is trying to access.

7. The user can happily use the application.

Authentication flow with ADFS for an extranet user

clip_image012

1. Our extranet user tries to access the application for the very first time.

2. The application issues a redirect to Azure AD.

3. Once the redirect is processed, the user enters his or her credentials in the webpage. Azure AD, knowing the organization has ADFS enabled, issues a redirect to the ADFS server.

4. As the user is using the internet (Extranet) to access the ADFS server they hit the ADFS Proxy server which proxies all traffic to the ADFS server living in the internal network. Just like with the Intranet user the ADFS server issues a token.

5. The token gets sent to the user.

6. The token is passed on to Azure Active Directory, validated, a new token is created and passed down to the user.

7. The token is then used to authenticate against the application and the user can start working with vigor!

References:

  • Identity and authentication in Office 2013 and O365
    http://www.microsoft.com/en-us/download/details.aspx?id=38193

Raspberry Pi has led me to this...


Originally posted on: http://geekswithblogs.net/MobileLOB/archive/2015/11/01/raspberry-pi-has-lead-me-to-this.aspx

So,   I’ve been playing around with various Raspberry Pi projects for a couple of years.

I’ve had a few hits (caller ID,  home automation, audio airplay) and I think only one miss (Time Capsule).

What's been great is that it's been easy to have a Linux machine where it doesn't matter too much if I get things wrong and it breaks; it's not mission critical or anything, and it's unrelated to what I have to do for work.

So I want to try to build a better Mac and PC backup solution, have a proper continuous integration server for mobile projects, and a Git server. Working for a Microsoft partner, it's also great to get a fresh perspective on what else is out there.

So we’ve had an HP Proliant server (something like a http://news.driversdown.com/Server/200811/27-1061.html ) in our office just doing nothing for a while and I took it upon myself to see what could be done with this box.

 

The first step, as I'm familiar with Debian Linux (http://debian.org), was to install the latest Jessie build. This required me to burn a complete install DVD. I needed to do this as the HP server has a SCSI DVD drive and hard disks, so that's what was required to get the necessary driver support.

The install took about 15 minutes and I had a machine I could use. The next step was to lock the machine away out of sight. I have too many monitors on my desk already, so I needed this machine to just run unobtrusively. I made sure I could SSH to the machine (which, I should admit, I called Beasty) away from my desk.

Beasty was just connected to the network and powered. I needed to do the majority of the work out of hours, so I made sure I could SSH to the machine from home over our VPN connection to the office. All good. So back to the same sort of Raspberry Pi experience: I now have a fast Linux machine that I can connect to out of sight; we were ready.

I crave as easy a life as I can muster (as I'm sure we all do). The next step was to ensure it was easy to copy files back and forth from Beasty. I'm using Macs and PCs all the time, so I installed Netatalk for Mac file share support and Samba for Windows file sharing. Netatalk 3 was a little bit of a trauma to get up and running, but once done I could access my home directory and, using an external hard disk on the server, use Beasty as a Time Machine target for Mac backup.

I then installed CUPS, which lets me share a couple of Zebra label printers with Macs and PCs. This has the added bonus that from an iPhone or iPad I can now AirPrint labels. Let's face it, who doesn't need that in their life…

 

Moving on from here, I installed Git. This was a breeze. Git means I now have source control in Xcode on my Mac with a secured remote repository on Beasty. I can work locally and then push changes up to this team server. I'm not sure if this is the way to go, as our company is focused around Microsoft Team Foundation Server, so I am going to try to get Git integration going with that and compare it with the open source alternative.

 

So to recap: for free, I have a Mac/PC/iOS file and print server with a cross-platform source control solution, built in about an hour.

Right now I'm installing the GNOME window manager to take me from command-line management to something a little bit more 2015. I'm blogging about all of this while I wait for the install to finish.

 

I’ll keep you posted...

 


Coexistence between Exchange forests (without trusts…) -- Part 5: Preparing the GALSync Server


Originally posted on: http://geekswithblogs.net/marcde/archive/2015/11/02/coexistence-between-exchange-forests-without-trustshellip----part-5-preparing.aspx

Installing the MIM server Prerequisites

? Note: The exchange 2007 management tools need to be installed on the MIM server for it to be able to provision users correctly.

? Note: In order for the GALSync process to access the exchange 2010 environment the server(s) that will be used to create objects with will have to be added to the WinRM ‘Trusted Hosts’ list (Due to the lack of a two-way trust)

 

Installing prerequisites

import-module ServerManager

Install-WindowsFeature Net-Framework-Features,rsat-ad-powershell,Application-Server,Windows-Identity-Foundation,Server-Media-Foundation,Xps-Viewer -IncludeAllSubFeature -Restart -Source d:\sources\SxS

WinRM Trusted hosts

Add the remote domain machine we'll be connecting to (Exchange PowerShell) for provisioning to the TrustedHosts list: set-item -path WSMAN:\localhost\client\trustedhosts -value 'Exchange 2010 servername' -Concatenate

In order to test if the server can access the Exchange 2010 environment use the following commands:

  • $rs = new-pssession -conf microsoft.exchange -conn http://EX2010FQDN/PowerShell -auth kerberos -cred (get-credential)

  • Invoke-Command $rs {get-recipient -ResultSize 1}

Prepare Active Directory for GALSync

? Note: Change the password in variable “$SP” below to match your password policy for service accounts.

import-module activedirectory

$sp = ConvertTo-SecureString "Pass@word1" -AsPlainText -Force

New-ADUser -SamAccountName MIMMA -Name MIMMA

Set-ADAccountPassword -Identity MIMMA -NewPassword $sp

Set-ADUser -Identity MIMMA -Enabled 1 -PasswordNeverExpires 1

New-ADUser -SamAccountName MIMSync -Name MIMSync

Set-ADAccountPassword -Identity MIMSync -NewPassword $sp

Set-ADUser -Identity MIMSync -Enabled 1 -PasswordNeverExpires 1

New-ADUser -SamAccountName SqlServer -Name SqlServer

Set-ADAccountPassword -Identity SqlServer -NewPassword $sp

Set-ADUser -Identity SqlServer -Enabled 1 -PasswordNeverExpires 1

New-ADUser -SamAccountName BackupAdmin -Name BackupAdmin

Set-ADAccountPassword -Identity BackupAdmin -NewPassword $sp

Set-ADUser -Identity BackupAdmin -Enabled 1 -PasswordNeverExpires 1

New-ADGroup -Name MIMSyncAdmins -GroupCategory Security -GroupScope Global -SamAccountName MIMSyncAdmins

? Note: Replace the values in the below command with the appropriate values for the domain.

New-ADGroup -Name MIMSyncOperators -GroupCategory Security -GroupScope Global -SamAccountName MIMSyncOperators

setspn -S FIMSync/..local \MIMSync

New-ADGroup -Name MIMSyncJoiners -GroupCategory Security -GroupScope Global -SamAccountName MIMSyncJoiners

New-ADGroup -Name MIMSyncBrowse -GroupCategory Security -GroupScope Global -SamAccountName MIMSyncBrowse

New-ADGroup -Name MIMSyncPasswordReset -GroupCategory Security -GroupScope Global -SamAccountName MIMSyncPasswordReset

Add-ADGroupMember -Identity MIMSyncAdmins -Members Administrator

 

Configure the server security policy

? Note: This is necessary to allow them to run as services.

1. Launch the Local Security Policy program.

2. Navigate to Local Policies, User Rights Assignment.

3. On the details pane, right click on Log on as a service, and select Properties.

4. Click Add User or Group, and in User and group names, type corp\mimsync; corp\mimma; corp\SqlServer, click Check Names, and click OK.

5. Click OK to close the Log on as a service Properties window.

6. On the details pane, right click on Deny access to this computer from the network, and select Properties.

7. Click Add User or Group, and in the User and group names, type corp\MIMSync; corp\MIMService and click OK.

8. Click OK to close the Deny access to this computer from the network Properties window

9. On the details pane, right click on Deny log on locally, and select Properties.

10. Click Add User or Group, and in the User and group names, type corp\MIMSync; corp\MIMService and click OK.

11. Click OK to close the Deny log on locally Properties window.

12. Close the Local Security Policy window.

 

Install SQL server 2014 (if required)

? Note: Change the value of <domain> to match the NetBIOS name of your Active Directory domain.

.\setup.exe /Q /IACCEPTSQLSERVERLICENSETERMS /ACTION=install /FEATURES=SQL,SSMS /INSTANCENAME=MSSQLSERVER /SQLSVCACCOUNT="<domain>\SqlServer" /SQLSVCPASSWORD="Pass@word1" /AGTSVCSTARTUPTYPE=Automatic /AGTSVCACCOUNT="NT AUTHORITY\Network Service" /SQLSYSADMINACCOUNTS="<domain>\Administrator"

 

Coexistence between Exchange forests (without trusts…)  -- Part 4: Preparing the US Exchange 2010 environment

Coexistence between Exchange forests (without trusts…)  -- Part 6: Installing the MIM 2016 Synchronization Service (GALSync)

Performance Tests for Azure Web Apps


Originally posted on: http://geekswithblogs.net/jakob/archive/2015/11/02/performance-tests-for-azure-web-apps.aspx

Anyone who has been involved in setting up the infrastructure needed to perform on-premises load testing with a realistic number of users knows how much work it is to both set up and maintain. With Visual Studio Ultimate/Enterprise you needed to create a test rig by provisioning multiple machines, installing a test controller and test agents on all of them, and configuring them to talk to each other.

Another aspect of it is that the typical team doesn’t run load tests of its applications on a regular basis; instead it is done during certain periods or sprints in the lifecycle of the project. The rest of the time those machines aren’t used for anything, so they are basically just using up your resources.

Cloud Load Testing

With the introduction of Cloud Load Testing, which is part of Visual Studio Online, Microsoft added the possibility to use Azure for generating the load for your tests. This means that you no longer have to set up or configure any agents at all; you only need to specify the type of load that you want, such as the number of users and how long the test should run. This makes it incredibly easy to run load tests, and you only pay for the resources that you use. You can even use it to test internal applications running behind a firewall.

So, this feature has been around for a couple of years, but there has always been a problem with discoverability due to it being available only from inside the Visual Studio Online portal. Teams that use Azure for running web apps but are not using Visual Studio Online for their development would most likely never see or use this feature.

But back in September Microsoft announced the public preview of performance testing Azure Web Apps, fully integrated in the Azure Portal. It still needs a connection to a Visual Studio Online account, but as you will see this can easily be done as part of setting up the first performance test.


Let’s take a quick look at how to create and run a performance test for an Azure Web App.

 

Azure Web App Performance Test

The new Performance Test functionality is available in the Tools blade for your web app, so go ahead and click that.

image

 

The first time you do this, you will be informed that you need to either create a Visual Studio Online account or link to an existing one. Here I will create a new one called jakobperformance.

Note that:

  • It must have a unique name since this will end up as <accountname>.visualstudio.com
  • It does not mean that you have to use this account (or any other VSO account for that matter) for your development.

 

image


Currently the location must be set to South Central US; this is most likely only the case during the public preview.

 


When you do this, you will receive a nice little email from Microsoft that includes a lot of links to more information
about how to get started with cloud load testing.

A simple thing really, but things like this can really make a difference when you are trying a new technology for the first time.

image 

 

So, once we have created or linked the VSO account we can go ahead and create a new performance test. Here is the information that you need to supply:

URL
The public URL that the performance test should hit. It will by default be set to the URL of the current Azure Web App, but you can change this.

Name
The name of this particular test run. As you will see, all test runs are stored and available in the Performance Test blade of your Azure Web App, so give it a descriptive name.

Generate Load From
Here you select the region the load should be generated from. Select the one that most closely represents the origin of your users.

User Load
The number of users that should hit your site. While this feature is in public preview you don’t have to pay for running these load tests, but there will be some limits on how much load you can generate. You can contact Microsoft if you need to increase this limit during the preview period.

Duration (Minutes)
Specifies how long (in minutes) the load test should run.

 

image

 

Once this is filled out, hit Run Test to start the load test. This will queue the performance test and then start spinning up the necessary resources in Azure for running the load test.

 

image

 

Clicking on the test run, you will see information start to come in after a short period of time, showing the number of requests generated and some key performance characteristics of how your application behaves under pressure.

 

image


Of course, this type of load testing doesn’t cover all you need in terms of creating realistic user load, but it is a great way to quickly hit some key pages of your site and see how it behaves. Then you can move on and author more complex load tests using Visual Studio Enterprise, and run them using Azure as well.

 

Go ahead and try it out for yourself, it couldn’t be easier and during the public preview it is free of charge!

Dax Studio 2.3.2 released


Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2015/11/03/dax-studio-2.3.2-released.aspx

This latest release includes a lot of UI polish features, and we also now have support for SQL 2016 and Excel 2016. That support is mainly an update to the installer, plus some changes to the Query Plan and Server Timings features to deal with changes to the xmlSql that comes back in the trace events.

Following the theory that a picture is worth a thousand words – below are screenshots of the changes in this release.

The File –> Open menu now includes a list of recently opened files.

image

For performance testing you can now set the run button to always do a clear cache before executing a query. This mode is easily selectable using the new arrow menu on the run button.

image

The modal dialogs all have updated styling and now include a shaded overlay so that the active portions of the screen are clearly visible.

image

An options pane has been added to the File menu for setting global program options

image

A Query History pane has been added which records all of the queries run by Dax Studio. If you have the Server Timings feature enabled the Server duration and the FE / SE timing are also tracked. You can double click on an item in the query history to insert it back into the editor. This is great for performance tuning as you can easily see which variation of the query was fastest and returned the expected number of rows and then bring it back into the editor.

image

The metadata pane now loads asynchronously. In earlier versions the loading of the metadata pane was a blocking operation and the user interface could go unresponsive for a short time while loading large models. Now the metadata is loaded on a background thread, so the interface remains responsive and the pane that is updating is greyed out to indicate that the load is still in progress.

image

The new “Define Measure” feature, which is a right-click option on a calculated measure, is a great way to see how a measure was defined without opening up the whole model, or to use it as a starting point to test some variations on the logic.

image

There are also a number of small bug fixes and tweaks, and a number of issues that were raised on CodePlex have been fixed (we always tag closed issues with the release they were fixed in).

Coexistence between Exchange forests (without trusts…) -- Part 6: Installing the MIM 2016 Synchronization Service (GALSync)


Originally posted on: http://geekswithblogs.net/marcde/archive/2015/11/04/coexistence-between-exchange-forests-without-trustshellip----part-6-installing.aspx

Step 1: In the unpacked MIM installation folder, navigate to the Synchronization Service folder.

Step 2: Run the MIM Synchronization Service installer. Follow the guidelines of the installer and complete the installation.

Step 3: In the welcome screen – click Next.
image

Step 4: Review the license terms and if you accept them, click Next.

Step 5: In the feature selection screen click Next.
image

Step 6: In the Sync database configuration screen, select:

1. The SQL Server is located on: This computer.

2. The SQL Server instance is: The default instance.

? Note: Adapt these values if you decide to host the SQL database on another server.
image

Step 7: Configure the Sync Service Account according to the account you created earlier:

      1. Service account: MIMSync
      2. Password: Pass@word1

? Note: This is the password that was assigned to variable “$SP” earlier.

3. Service Account Domain or local computer name: domain
image

Step 8: Provide MIM Sync installer with the relevant security groups:

1. Administrator = Corp\MIMSyncAdmins

2. Operator= corp\MIMSyncOperators

3. Joiner = corp\MIMSyncJoiners

4. Connector Browse = corp\MIMSyncBrowse

5. WMI Password Management= corp\MIMSyncPasswordReset

Step 9: In the security settings screen, check Enable firewall rules for inbound RPC communications, and click Next.
image

Step 10: Click Install to begin the installation of MIM Sync.

1. A warning concerning the MIM Sync service account may appear – click OK.

2. MIM Sync will now be installed.
image

Step 11: A notice on creating a backup for the encryption key will be shown – click OK, then select a folder to store the encryption key backup.
image

Step 12: When the installer successfully completes the installation, click Finish.
image

Step 13: You will be prompted to log off and log on for group membership changes to take effect. Click Yes to logoff.
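
After logging back on, an optional quick check (not part of the original steps) can confirm the synchronization service is installed and running; the service name below is an assumption based on typical MIM 2016 installs, which keep the FIM name, so adjust it if yours differs:

# Optional check: verify the synchronization service after logging back on.
# FIMSynchronizationService is the name MIM 2016 typically keeps from FIM.
Get-Service -Name FIMSynchronizationService | Select-Object Name, Status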

 

Coexistence between Exchange forests (without trusts…)  -- Part 5: Preparing the GALSync Server

Coexistence between Exchange forests (without trusts…)  -- Part 7: Creating Synchronization Agents

Access to Content and Structure


Originally posted on: http://geekswithblogs.net/simonh/archive/2015/11/04/access-to-content-and-structure.aspx

One of the best tools for managing the content and overall structure of a site collection is the Content and Structure tool accessible from Site Settings > Site Administration > Content and structure.

To make this option available, the SharePoint Server Publishing Infrastructure site collection feature needs to be activated.

There are also three options which need to be enabled on the permission level: Manage Permissions, Manage Web Site, and Add and Customize Pages. These permission options allow you to move/copy items between sites and libraries. Contribute permission is enough to access the Content and Structure tool itself.
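
For an on-premises farm you can also activate the publishing infrastructure feature from the SharePoint Management Shell instead of the Site Settings page; a minimal sketch, assuming the example site collection URL below is replaced with your own and that the PublishingSite feature name applies to your version:

# Optional: activate the SharePoint Server Publishing Infrastructure feature via PowerShell.
# http://intranet.contoso.com is an example URL - substitute your own site collection.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
Enable-SPFeature -Identity "PublishingSite" -Url "http://intranet.contoso.com"
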

Find out all the changes / alterations to SQL database objects


Originally posted on: http://geekswithblogs.net/Vipin/archive/2015/11/05/find-out-all-the-changes--alterations-to-sql-database.aspx

This script, when run by a user who has access to 'sys.traces', lists the SQL tables, stored procedures and other database objects which have been modified, by reading the default trace.

-- Locate the default trace for this instance and build the path to its log file.
DECLARE @filename VARCHAR(255) 
SELECT @FileName = SUBSTRING(path, 0, LEN(path)-CHARINDEX('\', REVERSE(path))+1) + '\Log.trc'  
FROM sys.traces   
WHERE is_default = 1;  
print @FileName
SELECT gt.HostName, 
       gt.ApplicationName, 
       gt.NTUserName, 
       gt.NTDomainName, 
       gt.LoginName, 
       gt.SPID, 
       gt.EventClass, 
       te.Name AS EventName,
       gt.EventSubClass,      
       gt.TEXTData, 
       gt.StartTime, 
       gt.EndTime, 
       gt.ObjectName, 
       gt.DatabaseName, 
       gt.FileName, 
       gt.IsSystem
FROM [fn_trace_gettable](@filename, DEFAULT) gt 
JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id 
WHERE EventClass in (164) AND gt.EventSubClass = 0 -- trace event 164 = Object:Altered

ORDER BY StartTime DESC; 
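
If you would rather run this from a scheduled job or a remote machine, the same query can be executed from PowerShell; a small sketch, assuming the SQLPS/SqlServer module is installed and that the script above has been saved to the example file path shown:

# Optional: run the default-trace query from PowerShell.
# 'SQLSERVER01' and C:\Scripts\object-changes.sql are example values.
Import-Module SqlServer   # or SQLPS on older installations
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -InputFile 'C:\Scripts\object-changes.sql' |
    Format-Table StartTime, LoginName, ObjectName, DatabaseName, EventName -AutoSize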

Kendo Grid MVC Wrapper Automatic Column Configuration


Originally posted on: http://geekswithblogs.net/sdorman/archive/2015/11/05/kendo-grid-mvc-wrapper-automatic-column-configuration.aspx

The Telerik Kendo Grid control is really powerful, especially when combined with the MVC wrappers. One of the things that make the MVC wrapper so useful is the ability to automatically (and easily) generate data-bound columns. It’s a single line of code:

 

.Columns(columns => columns.AutoGenerate(true))

 

The code behind AutoGenerate(true) understands some of the System.ComponentModel.DataAnnotations attributes. Specifically, it knows how to automatically configure the grid column for these attributes:

 

Attribute: Description
Compare: Compares two properties.
CreditCard: Specifies that a data field value is a credit card number.
CustomValidation: Specifies a custom validation method that is used to validate a property.
DataType: Specifies the name of an additional type to associate with a data field.
Display: Provides a general-purpose attribute that lets you specify localizable strings for types and members of entity partial classes.
DisplayColumn: Specifies the column that is displayed in the referred table as a foreign-key column.
DisplayFormat: Specifies how data fields are displayed and formatted by ASP.NET Dynamic Data.
Editable: Indicates whether a data field is editable.
EmailAddress: Validates an email address.
EnumDataType: Enables a .NET Framework enumeration to be mapped to a data column.
FileExtensions: Validates file name extensions.
FilterUIHint: Represents an attribute that is used to specify the filtering behavior for a column.
MaxLength: Specifies the maximum length of array or string data allowed in a property.
MinLength: Specifies the minimum length of array or string data allowed in a property.
Phone: Specifies that a data field value is a well-formed phone number using a regular expression for phone numbers.
Range: Specifies the numeric range constraints for the value of a data field.
RegularExpression: Specifies that a data field value in ASP.NET Dynamic Data must match the specified regular expression.
Required: Specifies that a data field value is required.
ScaffoldColumn: Specifies whether a class or data column uses scaffolding.
StringLength: Specifies the minimum and maximum length of characters that are allowed in a data field.
UIHint: Specifies the template or user control that Dynamic Data uses to display a data field.
Url: Provides URL validation.

 

What’s nice about this support is that you can use these attributes to adorn your model properties and declaratively describe some of the metadata about the column.

The problem, though, is that you can’t also set the Kendo Grid specific properties, such as column width, if the column is locked or not, and if it should be included in the column menu (to name just a few).

Fortunately, we can hook into the AdditionalValues dictionary of the Metadata property on a data-bound column (which is of type Kendo.Mvc.UI.GridBoundColumn<TModel, TValue>). This property holds an instance of a System.Web.Mvc.ModelMetadata (more specifically an instance of a CachedModelMetadata<TPrototypeCache>) object, which has all of the metadata related attributes defined on the properties of the model and is the key to our solution of providing automatic column configuration based on data annotation attributes. To do this, we simply define our own attribute and implement the IMetadataAware interface. The runtime will handle everything for us and our new attribute will be added to the AdditionalValues dictionary.

I created a GridColumnAttribute to allow all of the additional Kendo specific properties to be set.

 

using System;
using System.Web.Mvc;

namespace Cadru.Web.KendoExtensions
{
    public class GridColumnAttribute : Attribute, IMetadataAware
    {
        public const string Key = "GridColumnMetadata";

        public bool Hidden { get; set; }

        public bool IncludeInMenu { get; set; }

        public bool Lockable { get; set; }

        public bool Locked { get; set; }

        public int MinScreenWidth { get; set; }

        public string Width { get; set; }

        public void OnMetadataCreated(ModelMetadata metadata)
        {
            metadata.AdditionalValues[GridColumnAttribute.Key] = this;
        }
    }
}

 

Now, we can decorate our model with the new attribute:

 

public class EmployeeModel
{
    [Editable(false)]
    [GridColumn(Width = "100px", Locked = true)]
    public string EmployeeID { get; set; }

    [GridColumn(Width = "200px", Locked = true)]
    public string EmployeeName { get; set; }

    [GridColumn(Width = "100px")]
    public string EmployeeFirstName { get; set; }

    [GridColumn(Width = "100px")]
    public string EmployeeLastName { get; set; }
}

 

However, that’s only part of the solution. We still need to tell the Kendo Grid that it needs to do something with this new attribute. To do this we can use the overload for the AutoGenerate method which takes an Action:

 

.Columns(columns => columns.AutoGenerate(c => GridColumnHelpers.ConfigureColumn(c)))

 

The ConfigureColumn method looks like this:

 

using Kendo.Mvc.UI;
using System;
using System.Web.Mvc;

namespace Cadru.Web.KendoExtensions
{
    public static class GridColumnHelpers
    {
        public static void ConfigureColumn<T>(GridColumnBase<T> column) where T : class
        {
            CachedDataAnnotationsModelMetadata metadata = ((dynamic)column).Metadata;
            object attributeValue = null;
            if (metadata.AdditionalValues.TryGetValue(GridColumnAttribute.Key, out attributeValue))
            {
                var attribute = (GridColumnAttribute)attributeValue;
                column.Width = attribute.Width;
                column.Locked = attribute.Locked;
                column.Hidden = attribute.Hidden;
                column.IncludeInMenu = attribute.IncludeInMenu;
                column.Lockable = attribute.Lockable;
                column.MinScreenWidth = attribute.MinScreenWidth;
            }
        }
    }
}

This takes advantage of the fact that the method is being called in the context of automatically generating data-bound columns, so it’s able to take the column and cast it to a dynamic object in order to reference the Metadata property. We have to do this because the IGridBoundColumn doesn’t expose the Metadata property and we can’t cast it directly to a GridBoundColumn<TModel, TValue> because (among other reasons) we don’t know the type for TValue. That leaves us with casting it to dynamic and letting the dynamic dispatcher figure out how to give us back the requested property. From there, we look to see if the AdditionalValues dictionary contains our attribute, and if it does we then set the column properties to their respective values as specified by the attribute. We now have the ability to automatically configure the additional column properties using metadata specified in our model while still automatically generating the data-bound columns.


Azure Application Component Deployment


Originally posted on: http://geekswithblogs.net/tmurphy/archive/2015/11/05/azure-application-component-deployment.aspx

I think I can! I think I can!

One of the aspects of Azure development that I have found the least amount of information written about is the deployment of your application components. This is especially the case when ALM (Application Lifecycle Management) approaches are considered. As with most things you get the WYSIWYG demo, but not how things should actually be done in an enterprise environment. This post will try to cover as many deployment approaches as possible. While it won’t be comprehensive, it will give you enough alternatives to start coming up with your own processes.

Visual Studio

This is usually where every demo of Azure development begins and ends.  You write your web app or cloud service and “Poof!”, your solution magically ends up in your Azure environment.  In this scenario you use the publish option of the project context menu to open a wizard that allows you to connect to an Azure subscription and define what site or service you want to publish to.  The process is even simpler if you defined these parameters when you created the project.

image

Zip Files

While creating web jobs I came across this method of deployment.  You zip up your web job solution and go to Web Jobs under Settings for your Web App and you will see the blade shown below.  You can learn more about this in my earlier post here.

image

Packages

This is the point where we get to a manageable deployment process.  Once you have finished coding your Cloud Service you can go to the project in Visual Studio and create a deployment package using the context menu shown below.

image

Once you have that visit your cloud service in the portal and select the Update button.  The blade in the figure below will appear and allow you to upload the package and the appropriate configuration.  Azure will then complete the deployment from the provided resources.

image

FTP

If you like to be hands on, FTP may be the deployment approach that is right for your team.  It should be familiar to anyone who is used to manually copying their files to a web server.  Once you go into Deployment Credentials for your Web App as shown below and then grab the FTP host name from the properties page, you will be able to navigate straight to the wwwroot directory and copy your application files to the server.

image

From Source Control

Any of us who have been in this game for a couple of decades know that automated builds and deployments save a lot of manual deployment mistakes.  Thankfully we have a number of options for deploying from source control to Azure.  Rather than explain each, I’ll put a couple of links here that do a good job of explaining the process of setting these scenarios up.

Visual Studio Online

TFS On Premises

Summary

This post gives just a quick taste of Azure deployments, from the quick and dirty options to methods better suited for enterprise development processes.  Be sure to try out as many of them as possible to understand how they fit with your development team.

The future for Microsoft (predictions)


Originally posted on: http://geekswithblogs.net/sdorman/archive/2015/11/05/the-future-for-microsoft-predications.aspx

Nearly two years ago, I wrote a post called The future for Microsoft. This was a “predictions” post based on what I was seeing in the industry and in Microsoft, with some judicious “reading between the lines” and speculation on my part.

To quickly recap those predictions, I said:

  • Microsoft will change how it reports financially.
  • Microsoft will become a “consumer-focused enterprise company.”
  • Windows will converge into a single code base capable of running on any platform.
  • Application development will converge into allowing developers to maintain a single code base for an app that will run on any device capable of running Windows.
  • Release frequency will dramatically increase.
  • The “modern” user interface is here to stay.

Since that was a predictions post, let’s take a look at where Microsoft is just two years later. From my perspective, I was right on 5 out of 6 predictions.

In September 2015, Microsoft officially announced that they were changing their financial reporting structure, saying that beginning in fiscal year 2016, revenue and operating income will be reported based on three new operating segments. While I was wrong about the actual structure, I was right about the fact that Microsoft would change how it reports. This structure, and the announcements since Satya Nadella became CEO, also reflects how Microsoft is becoming a “consumer-focused enterprise company”. This meant then, and still means, that Microsoft is an enterprise company trying to make a larger consumer presence. This has played out in numerous different ways, most predominantly in the Surface and Lumia lines. Both of these devices are very enterprise-friendly, and leverage Microsoft’s strength in that space, but are also making huge in-roads into the consumer space. Granted, the Lumia line has been very slow to gain traction in the United States, but it seems with Windows 10 that’s starting to change.

Speaking of Windows 10, it’s Microsoft’s first operating system that is truly universal. Windows 10 will run on phones, tablets, laptops, PCs, Xbox One, IoT devices (like Raspberry Pi), and potentially a whole host of other devices (like HoloLens and Band). This takes the mess of different operating system code bases and consolidates them into a single code base. It allows Microsoft to leverage the best talent and ideas from these different products into a single operating system. It also gives the end user a consistent and familiar experience across all of their devices. Windows 10 is a game changer for Microsoft and already has a lot of traction.

Windows 10 application development has also converged with the notion of Windows Universal Apps. While this hasn’t played out completely, it is showing huge promise. As an app developer, you no longer target devices, you target “families”, which effectively allows you to easily specify entire groups of devices that your app supports. It’s one code base and “intelligent” user interface controls that allow your application to scale up or down across devices.

Although the user interface for Windows 10 has shifted into a different version of the modern user interface introduced by Windows Phone, it’s still very much alive and expanding onto all the different devices Windows 10 supports. For as much ridicule as Microsoft received for the Windows Phone UI, it’s now been copied by Apple and Google. It’s found its way onto ATMs, cash registers, and many other places. It’s definitely here to stay.

The one prediction I made that I’m not sure of is a faster release frequency. While Microsoft has increased release frequency for some products, it hasn’t happened across the board and it’s inconsistent, both across products and even within some product lines. The Windows 10 previews, with their “fast” and “slow” rings, partially support this, but it’s also inconsistent between Windows 10 desktop and Windows 10 phone releases. That may change once both products are released, but for now they are on different release schedules and frequencies.

Overall, I think my predictions were pretty accurate and things have played out pretty much like I anticipated. The future for Microsoft is exciting. I predicted that it would be the end of 2016 until we’re completely there. I think that’s still an accurate prediction. By the end of 2015, we’ll have Windows 10 running on phones, tablets, laptops, desktops, and the Xbox One. We’ll have Windows Universal apps that can run on any (or all) of those devices. We’ll still have devices that aren’t running Windows 10 yet, like the Microsoft Band 2, but by the end of 2016 I think all of these devices will be running Windows 10.

Error: Local Database Runtime error occurred. Error occurred during LocalDB instance startup: SQL Server process failed to start.


Originally posted on: http://geekswithblogs.net/pabothu/archive/2015/11/07/error-local-database-runtime-error-occurred.-error-occurred-during-localdb.aspx

This is a common error we get when using LocalDB for a website running under IIS. Everything works fine if we run the website under IIS Express, i.e. when we run it through Visual Studio. Once we deploy the site to actual IIS we get the error shown below.

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 50 - Local Database Runtime error occurred. Error occurred during LocalDB instance startup: SQL Server process failed to start. )

This error message will not give a clear idea but we will get additional information in Windows Event log. Looking in the Application section under Windows Logs we find the following message:

Cannot get a local application data path. Most probably a user profile is not loaded. If LocalDB is executed under IIS, make sure that profile loading is enabled for the current user.

This message clearly says that the problem we're facing is that the user profile needs to be loaded.

To fix this we need to edit the applicationHost.config file, which is usually located in C:\Windows\System32\inetsrv\config. Enabling loadUserProfile alone is not enough to fully load the user profile; we also need to enable setProfileEnvironment, i.e. set both attributes to true, as shown below.

<applicationPools>
    <add name="DefaultAppPool" />
    <add name="Classic .NET AppPool" managedRuntimeVersion="v2.0" managedPipelineMode="Classic" />
    <add name=".NET v2.0 Classic" managedRuntimeVersion="v2.0" managedPipelineMode="Classic" />
    <add name=".NET v2.0" managedRuntimeVersion="v2.0" />
    <add name=".NET v4.5 Classic" managedRuntimeVersion="v4.0" managedPipelineMode="Classic" />
    <add name=".NET v4.5" managedRuntimeVersion="v4.0" />
    <applicationPoolDefaults managedRuntimeVersion="v4.0">
        <processModel identityType="ApplicationPoolIdentity" loadUserProfile="true" setProfileEnvironment="true" />
    </applicationPoolDefaults>
</applicationPools>
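
If you'd rather not hand-edit applicationHost.config, the same two attributes can be set from an elevated PowerShell prompt using the WebAdministration module; a minimal sketch that, like the snippet above, targets the application pool defaults:

# Optional: set loadUserProfile and setProfileEnvironment on the application pool defaults.
# Run from an elevated prompt on the IIS server; recycle the app pool (or iisreset) afterwards.
Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.applicationHost/applicationPools/applicationPoolDefaults/processModel' `
    -Name 'loadUserProfile' -Value 'True'
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.applicationHost/applicationPools/applicationPoolDefaults/processModel' `
    -Name 'setProfileEnvironment' -Value 'True'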

Coexistence between Exchange forests (without trusts…) -- Part 7: Creating Synchronization Agents


Originally posted on: http://geekswithblogs.net/marcde/archive/2015/11/09/coexistence-between-exchange-forests-without-trustshellip----part-7-creating.aspx

US GALMA – Exchange 2010

Step 1: Open the “Synchronization Service Manager”.

Step 2: Click on the “management Agents Button.

Step 3: From the Actions menu, click Create.

Step 4: In the “Management Agent Designer”, from the drop-down list, select “Active Directory global address list (GAL)”. Enter a name to identify the management agent by and click “Next”.

image

Step 5: On the “Configure Directory Partitions” pane, tick the directory partition displayed and click “Containers”.

Step 6: Check all containers that need to be synchronized to MIM, including the container you will use to store the synchronized items in (the destination for the mail contacts). Click “OK” & “Next”.

Step 7: On the “configure GAL” page, click “Target”, followed by “Container” and select the OU where the mail contacts from the other domain(s) will be stored. Click “OK”.

Step 8: On the “Configure GAL” page, click “Source”, followed by “Add Containers”. Tick the containers from which you want to import objects into MIM. Click “OK”.

image 

Step 9: Click “OK”.

image

Step 10: On the “configure GAL” page, under “Exchange Configuration”, click “Edit”.

 

Step 11: In "Edit SMTP Mail Suffix", enter all email suffixes that will exist on objects in the exchange 2010 environment that need to be synced in the form of “@domain.com” and click “add”. When done click “OK”.

image
 
image
 

Step 12: On the “configure GAL” page, click “Next”.

image
 

Step 13: On the “Configure Provisioning Hierarchy”, click “Next”.

image
 

Step 14: On the “Select Object Types” page, click “Next”.

image
 

Step 15: On the “Select Attributes” page, click “Next”.

image
 

Step 16: On the “Configure Connector Filter” page, click “Next”.

 
image 
  

Step 17: On the “Configure Join and Projection Rules” page, click “Next”.

 
image 
  

Step 18: On the “Configure Attribute Flow” page, click “Next”.

image
 

Step 19: On the “Configure Deprovisioning” page, click “Next”.

image
 

Step 20: On the “Configure Extensions” page, click “Exchange 2010 RPS URI”.

image
 

Step 21: Enter the Exchange 2010 remote PowerShell URL in the following format:
http://exchange2010FQDN/PowerShell

? Note: This is the Exchange 2010 server which was previously added to the
TrustedHosts list for WinRM
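
If you want to confirm that WinRM can actually reach that server before finishing the wizard, a small optional check (not part of the original steps):

# Optional check: confirm WinRM connectivity to the Exchange 2010 server.
# EX2010FQDN is the same placeholder used earlier in this series.
Test-WSMan -ComputerName EX2010FQDN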

 
image 
  
   

Step 22: Click “Finish”.

 

UK GALMA – Exchange 2007

Step 1: Open the “Synchronization Service Manager”.

Step 2: Click on the “Management Agents” button.

Step 3: From the Actions menu, click Create.

Step 4: In the “Management Agent Designer”, from the drop-down list, select “Active Directory global address list (GAL)”. Enter a name to identify the management agent by and click “Next”.

image

Step 5: On the “Configure Directory Partitions” pane, tick the directory partition displayed and click “Containers”.

Step 6: Check all containers that need to be synchronized to MIM, including the container you will use to store the synchronized items in (the destination for the mail contacts). Click “OK” & “Next”.

Step 7: On the “configure GAL” page, click “Target”, followed by “Container” and select the OU where the mail contacts from the other domain(s) will be stored. Click “OK”.

image

Step 8: On the “Configure GAL” page, click “Source”, followed by “Add Containers”. Tick the containers from which you want to import objects into MIM. Click “OK”.

image

image

image

 

Step 9: Click “OK”.

image
 

Step 10: On the “configure GAL” page, under “Exchange Configuration”, click “Edit”.

image

 

Step 11: In "Edit SMTP Mail Suffix", enter all email suffixes that will exist on objects in the exchange 2010
environment that need to be synced in the form of “@domain.com” and click “add”. When done click “OK”.

image
 

Step 12: On the “configure GAL” page, click “Next”.

image
 

Step 13: On the “Configure Provisioning Hierarchy”, click “Next”.

image
 

Step 14: On the “Select Object Types” page, click “Next”.

image
 

Step 15: On the “Select Attributes” page, click “Next”.

image
 

Step 16: On the “Configure Connector Filter” page, click “Next”.

 
image 
  

Step 17: On the “Configure Join and Projection Rules” page, click “Next”.

 
image 
  

Step 18: On the “Configure Attribute Flow” page, click “Next”.

image
 

Step 19: On the “Configure Deprovisioning” page, click “Next”.

image
 

Step 20: On the “Configure Extensions” page, click “Next”.

? Note: For Exchange 2007 there is no need to enter any information in this field because we installed the Exchange 2007 management tools. Without them, MIM would be unable to provision objects in the 2007 forest.

image
 
Coexistence between Exchange forests (without trusts…)  -- Part 6: Installing the MIM 2016 Synchronization Service (GALSync)

Coexistence between Exchange forests (without trusts…)  -- Part 8: Enabling Provisioning

Architecting UI Automation Projects for Maintainability


Originally posted on: http://geekswithblogs.net/Aligned/archive/2015/11/10/architecting-ui-automation-projects-for-maintainability.aspx

I’ve been on a team now for a year+ and we’ve been using Selenium to automate acceptance tests before we consider a feature completed. We’ve caught many regression issues that our Jasmine unit tests haven’t (which sometimes is a gap in our Jasmine tests, and sometimes is just a Knockout binding issue) and have avoided QA picking up buggy software for manual testing.

I was reading Specification by Example, Chapter 9, “Automating validation without changing specifications”. The book deserves its own article (several, actually) and is worth the time to read and discuss with others. We aren’t practicing it on my current team, but I’d like to introduce ideas from it and apply things as I can. The author interviewed many teams and combines a lot of lessons learned, so it is very valuable. We’ve learned some of the same lessons that are pointed out in the book about treating automation code like you would treat the application code (refactor, follow good patterns, etc.).

See my code that I started for a presentation I did at South Dakota Code Camp in Sioux Falls, SD on November 7th, 2015. This example uses the ASP.NET MVC File > New Project template, automates the register action and verifies that it works.

Test Automation Layer

Methods in this layer are the [TestMethod] methods (in MSTest) and don’t have any direct interactions with the DOM.

Example: in the RegisterTest.cs

[TestMethod]
[TestCategory(TestCategories.Registration)]
public void UserCanRegisterTest()
{
    this.CurrentBrowserManager = new BrowserManager();
    this.CurrentBrowserManager.Launch(BaseUri);
    var homePage = new HomePage();

    // we end up back on the home page
    string userName = "ben_" + DateTime.Now.Ticks + "@jump.com";
    homePage = homePage.RegisterUser(userName, "Pa$$word1");
    var helloMessage = homePage.GetAuthenticatedHeaderMessage();
    Assert.IsTrue(helloMessage.Contains(userName));
}

Technical / mappings

Do all of the mappings to the DOM inside of classes.

Notice the private methods that return an IWebElement, such as GetLoginLink(). Keeping the FindElement calls in methods makes it much easier to update, find trouble spots, and maintain. We’ve also found that you should not let the IWebElement leak outside of the class. Return object wrappers instead.

Example: HomePage.cs

/// <summary>
/// UI Mapping for the Home Page. This is a wrapper for all UI interactions.
/// </summary>
public class HomePage : BaseMappingPage
{
    public HomePage Login(string username, string password)
    {
        this.GetLoginLink().Click();
        var loginPage = new LoginPage();
        loginPage.Login(username, password);
        return new HomePage();
    }

    private IWebElement GetLoginLink()
    {
        return this.Driver.FindElement(By.Id("loginLink"));
    }

    public HomePage RegisterUser(string userName, string password)
    {
        this.GetRegisterLink().Click();
        var registerPage = new RegisterPage();
        return registerPage.RegisterUser(userName, password);
    }

    private IWebElement GetRegisterLink()
    {
        return this.Driver.FindElement(By.Id("registerLink"));
    }

    public string GetAuthenticatedHeaderMessage()
    {
        var element = this.Driver.FindElement(By.Id("auto-AuthenticatedHeaderHello"));
        return element == null ? string.Empty : element.Text;
    }
}

Workflow Methods

It is very convenient to be able to call homePage.RegisterUser in the test method, instead of having to call all the steps in the TestMethod. It keeps things cleaner, you can re-use the steps, and changes can be made easier. Treat your test code like production code and follow good programming practices.

Example: RegisterPage.RegisterUser

/// <summary>
/// Register the user with the given username and password.
/// Redirected to homepage after success.
/// </summary>
/// <param name="userName"></param>
/// <param name="password"></param>
/// <returns></returns>
public HomePage RegisterUser(string userName, string password)
{
    this.GetEmailInput().SendKeys(userName);
    this.GetPasswordOneInput().SendKeys(password);
    this.GetConfirmPasswordInput().SendKeys(password);
    this.GetSubmitButton().Click();

    // NOTE: building in a loading indicator and waiting for the div to
    // be removed from the page will help you avoid timing issues in your tests.
    // For example: the UI may run faster than your web server and browser processes
    // and it will try to click on a link that isn't loaded yet.
    return new HomePage();
}

Driver, Browser, Tools Project

Use base classes and other classes to handle the WebDriver and the browser interactions.

Example: I’m extending from BaseMappingPage, this can grow as needs arise.

public class BaseMappingPage
{
    protected RemoteWebDriver Driver => BrowserManager.Driver;
}

Example 2: BrowserManager has a static property of the Driver. I used static to avoid having to pass it in to every class. There may be a better way here.

/// <summary>
/// Manage the browser instances for Selenium tests.
/// </summary>
public class BrowserManager
{
    public static RemoteWebDriver Driver { get; private set; }

    /// <summary>
    /// Launch must be called in order to populate the browser and open it.
    /// </summary>
    /// <param name="baseUri"></param>
    /// <param name="browserType"></param>
    public void Launch(string baseUri, BrowserType browserType = BrowserType.Firefox)
    {
        Driver = BrowserDriverFactory.CreateDriver(browserType);
        Driver.Navigate().GoToUrl(baseUri);
    }

    public void Quit()
    {
        Driver.Quit();
    }
}

Avoid Thread.Sleep

We’re using KnockoutJs to create dynamic DOM elements, and sometimes the test fails because the element isn’t there yet or the test tries to click before it is ready. The first approach was to add Thread.Sleep(2000), but that has a load of problems. What if it takes 2500ms? What if it only takes 500ms? Then you’re slowing down your already long-running test run. It’s better to poll (for example with Selenium’s WebDriverWait) until a loading indicator disappears. It turns out that showing loading or working indicators is also good for users when lag or slower connections are a reality.

“Automate Below the Skin”

UI tests take longer to run and require more work to maintain. Sometimes you can hit the API directly and avoid having to click on the UI button.

You should have more unit tests than UI/acceptance tests.

Remember the testing pyramid:

pyramid

(image from http://www.ontestautomation.com/tag/test-automation-pyramid/)

 

Hopefully these hints will get you going and help you avoid some of the pitfalls we ran into over the last few years.
