
Set Up a Virtual Test Device (WEC7 or WEC2013) on Windows 10


Originally posted on: http://geekswithblogs.net/DougMoore/archive/2015/10/11/set-up-a-virtual-test-device-wec7-or-wec2013-on.aspx


The following links from Microsoft are close but not currently up-to-date with respect to Hyper-V and getting set up on a Windows 10 host computer / development system.

Set Up a Virtual Test Device (Compact 2013)

Use the Sample Virtual Device (Compact 2013)

Use Hyper-V to create a virtual machine (Compact 2013)

This blog entry contains a short, concise list of steps to set everything up on your development system.

Starting with the information in the Microsoft links above, and adding the information I found at Windows 10 Forum:

Hyper-V virtualization - Setup and Use in Windows 10

Below is a complete summary of the steps involved.

Turn on Hyper-V

1. Control Panel, Programs and Features, Turn Windows features on or off

2. Check Hyper-V (includes Hyper-V Management Tools and Hyper-V Platform)

3. Click OK

4. Hyper-V will install and you will be prompted to restart the system.

5. Close all programs and restart the system.
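If you prefer to script this step, the Hyper-V feature can also be enabled from an elevated PowerShell prompt. A minimal sketch, equivalent to the wizard steps above:

# Enable Hyper-V (management tools and platform) on Windows 10;
# run elevated, then restart when prompted.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
Restart-Computer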

With the next steps we will be creating a single VM named CEPC.

This VM will be mounting the sample virtual hard disk supplied with the CEPC BSP found in Windows Embedded Compact 7.

There is a known issue with the sample virtual hard disk supplied with the CEPC BSP found in Windows Embedded Compact 2013, the bootloader settings cannot be saved.

This issue may be resolved in a future WEC2013 QFE but, in the meantime, the workaround is to use the WEC7 sample virtual hard disk, with its CEPC bootloader, for loading, running, and debugging both WEC7 and WEC2013 OS images.

Since this one CEPC VM will end up working with both OSes, WEC7 and WEC2013, there really is no reason to create VMs with OS-specific naming; a VM simply named CEPC will do.

The first thing we need to do is stage the CEPC sample virtual hard disk in our Hyper-V working directory.

We don’t want to use the sample virtual hard disk where it lives just in case Microsoft updates it in a future QFE.

The virtual hard disk file will be modified as we run the VM, so we want to make ourselves a copy first and use that copy.

For this exercise I will be using “C:\Virtual Machines\CEPC” as my base directory for this setup, but you can choose any location that suits you; just be consistent when using it as you follow the procedures below.

Stage Sample CEPC Virtual Hard Disk

1. Copy contents of folder:
WINCE700\platform\VirtualPC\VM
to:
C:\Virtual Machines\CEPC

a. Cevm.vmc

b. hd0_sample.vhd

c. vpc_bootce.vfd
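If you prefer to script the staging, here is a minimal PowerShell sketch of the same copy (paths assume a default WEC7 install under C:\WINCE700):

# Stage a private working copy of the sample CEPC files.
New-Item -ItemType Directory -Path 'C:\Virtual Machines\CEPC' -Force
Copy-Item -Path 'C:\WINCE700\platform\VirtualPC\VM\*' -Destination 'C:\Virtual Machines\CEPC'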

Create Virtual Machine

1. Hyper-V Manager, New, Virtual Machine…

2. New Virtual Machine Wizard

a. Specify Name and Location:

i. Name: CEPC
(This one VM will work with WEC7 and WEC2013, so there is really no good reason to create VMs with OS-specific naming.)

ii. Store the virtual machine in a different location: C:\Hyper-V

b. Specify Generation: Generation 1
(Do not select Generation 2 or you will not be able to select the CEPC sample virtual hard disk, which is of type VHD. Only VHDX virtual hard disks can be used with a Generation 2 Hyper-V VM.)

c. Assign Memory: Startup memory: 512 MB, Use Dynamic Memory for this virtual machine.
(Use of Dynamic Memory is required, else the WEC OS launch will fail.)

d. Configure Networking (skip, click Next, will set up later)

e. Create Virtual Hard Disk: Use an existing virtual hard disk: C:\Virtual Machines\CEPC\hd0_sample.vhd

f. Finish
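For reference, the same VM can be created with the Hyper-V PowerShell module. A sketch of the wizard settings above, assuming the module was installed along with Hyper-V:

# Create the Generation 1 VM against the staged sample VHD.
New-VM -Name 'CEPC' -Path 'C:\Hyper-V' -Generation 1 -MemoryStartupBytes 512MB -VHDPath 'C:\Virtual Machines\CEPC\hd0_sample.vhd'
# Dynamic Memory is required, else the WEC OS launch will fail.
Set-VMMemory -VMName 'CEPC' -DynamicMemoryEnabled $true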

If you run the CEPC VM at this point in the setup process you will see the bootloader prompting, and counting down, “Hit space to enter configuration menu 5…”, 4, 3, 2, 1.

Hit the space bar during these prompts to enter the bootloader menu.

If you hit 1 for, [1] Show Current Settings, you will see that Main: Boot source and KITL device both list “(NULL)” for their current values.

We need to create a virtual switch and set up the Legacy Network Adapter in our CEPC VM Settings in order to get the bootloader happy with its network adapter.

Create Virtual Switch

1. Hyper-V Manager, Virtual Switch Manager…

2. New virtual network switch, External, Create Virtual Switch

3. Name: SharedNetwork

4. External network: Intel(R) 82579LM Gigabit Network Connection
(or the wired network adapter found on your development system that has DHCP access)

5. Allow management operating system to share this network adapter

6. OK

7. Yes @ Pending change may disrupt network connectivity
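The switch can also be created from PowerShell. A sketch, where the adapter name 'Ethernet' is an assumption; check yours with Get-NetAdapter:

# Create an external switch bound to the wired adapter,
# shared with the management (host) operating system.
New-VMSwitch -Name 'SharedNetwork' -NetAdapterName 'Ethernet' -AllowManagementOS $true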

Create Legacy Network Adapter in CEPC VM Settings

1. Hyper-V Manager, CEPC, Settings…

2. Select, Network Adapter (Not connected), in the Hardware listing in the panel on the right-hand side of the Settings dialog

3. Click on the Remove button
(To, “Use a legacy network adapter instead of this network adapter to perform a network-based installation of the guest operating system or when integration services are not installed in the guest operating system.”)

4. Apply

5. Select, Add Hardware, in the Hardware listing in the panel on the right-hand side of the Settings dialog

6. Select, Legacy Network Adapter, and click on the Add button

7. Virtual switch: SharedNetwork

8. OK
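The same adapter swap can be scripted. A sketch using the Hyper-V cmdlets:

# Replace the default (synthetic) adapter with a legacy adapter
# bound to the SharedNetwork switch.
Remove-VMNetworkAdapter -VMName 'CEPC'
Add-VMNetworkAdapter -VMName 'CEPC' -IsLegacy $true -SwitchName 'SharedNetwork'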

If you run the CEPC VM at this point in the setup process you will see the bootloader prompting, and counting down, “Hit space to enter configuration menu 5…”, 4, 3, 2, 1.

Hit the space bar during these prompts to enter the bootloader menu.

If you hit 1 for, [1] Show Current Settings, you will see that Main: Boot source and KITL device both list “DEC21140 at PCI bus 0 dev 10 fnc 0” for their current values.

Our CEPC VM bootloader is happy with its network adapter.

Load and Launch WEC OS Image

1. Hyper-V Manager, CEPC, Connect…

2. Start

3. If this is the first time you’ve run this VM, hit the space bar during “Hit space to enter configuration menu” prompts to enter the bootloader menu and verify, set up, and save the following bootloader settings, else skip to step 8.

4. [4] Network Settings, [1] Current Settings
Network:
KITL state: enabled
KITL mode: interrupt
DHCP: enabled
IP Address: 0.0.0.0
VMINI: enabled

5. [0] Exit and Continue

6. [7] Save Settings
(Should see: Selection: 7, Current settings has been saved)

7. [0] Exit and Continue (to boot from Platform Builder)

8. Should see DHCP Discover Message attempts, followed by DHCP assigning an IP address, followed by “BOOTME” messages

9. From WEC7 or WEC2013 Platform Builder, with an OS Design based on the WEC CEPC BSP sysgened and fully built into an OS image, Attach Device

10. You should see PC-00155D01A407, or something like that; select it and click Apply

11. Download of OS image from Platform Builder to CEPC VM target should commence immediately.

Now you can debug and develop your WEC drivers and applications on your host PC using Visual Studio and/or Platform Builder with the CEPC VM you created.


Slides and Scripts from SharePoint Saturday Cincinnati 2015


Originally posted on: http://geekswithblogs.net/bjackett/archive/2015/10/11/slides-and-scripts-from-sharepoint-saturday-cincinnati-2015.aspx

   Thank you to all of the attendees at my “Running your Dev / Test VMs in Azure for Cheap” presentation at SharePoint Saturday Cincinnati 2015 (or, as the locals liked to call it, ScarePoint Saturday Spookinnati, due to the Halloween theme).  The slides and scripts from my presentation are below.  Enjoy.

 

PowerShell Scripts

 

Slide Deck

 

      -Frog Out

ISS Astronauts contact with Corpus Christi Catholic School


Originally posted on: http://geekswithblogs.net/raysmithequip/archive/2015/10/12/167479.aspx

Here's a video posted by Chris Brushie, KB3TQO, of the ISS astronauts' contact with Corpus Christi Catholic School on 10.06.15, at the Capitol Theatre in Chambersburg, PA.  CVARC coordinated the contact via amateur radio links with the ISS.  You'll hear me right at the very end of the video as I introduce Brad and Chris.  This was a once-in-a-lifetime event and the students got 18 questions answered.  The teachers, NASA folks, and especially the students really were all AWESOME!!

Special kudos to astronaut Kjell Lindgren, KO5MOS, and
Corpus Christi Catholic School teachers Amanda Blough and Amy Fetterhoff!!!!  Ditto for our ARISS mentor John Kludt, K4SQC

If you know of any teachers looking to go above and beyond for their students, send them to the  http://www.ariss.org/ page so they can check out the submit proposals page to get started.  The Cumberland Valley Amateur Radio Club is an ARRL affiliated Special Services Club and we will be happy to advise anyone on what it takes to get the ball rolling.  

CVARC President  
Ray Smith N3TWU


Workaround for No Locations Available with Azure DevTest Labs


Originally posted on: http://geekswithblogs.net/bjackett/archive/2015/10/12/workaround-for-no-locations-available-with-azure-devtest-labs.aspx

   In this post I’ll walk through a workaround to the “There are no locations available. You may not…” error when trying to provision a new instance of Azure DevTest Labs in the current preview (as of 2015/10/12).

 

Problem

   A few weeks ago during AzureCon 2015 there was an announcement that the new DevTest Labs offering was available in preview.  For those of you unfamiliar, DevTest Labs allows an administrator to set quotas for money used per month, size of VMs available, automatic shut down times for VMs, and more.  I immediately tried to register and followed the instructions to wait 30-60 minutes.  Later on I saw the DevTest Labs section available in the Create blade (note this requires using this link from the above page, which as far as I can tell includes the “Microsoft_Azure_DevTestLab=true” querystring parameter to “enable” the DevTest Labs pieces in the UI).  When I attempted to create a new instance of a DevTest Labs I ran into an error stating that “there are no locations available”.

   I waited a little while longer and refreshed the browser but still had the same issue.  Today, even days and weeks later, there is no change and still the same error.  Thankfully I ran across a support forum post that led me in the right direction to resolve the issue.

Can’t create new lab in Azure DevTest Labs preview

https://social.microsoft.com/Forums/en-US/0ad3218b-6d18-44ac-915c-5ccd15b14f33/cant-create-new-lab?forum=DevTestLabs

 

Workaround

   As fellow forum poster “runninggeek” mentioned, there was an issue with the Microsoft.DevTestLab provider in my subscription.  Others who registered after me did not have this problem, as a problem with the registration backend was fixed shortly after the announcement went out.  Here is the PowerShell script I ran to work around my issue.  You can also download it from my OneDrive.

 

 

Switch-AzureMode AzureResourceManager
Add-AzureAccount

# if you have multiple subscriptions tied to account may need to select specific one for below command
Get-AzureSubscription | Select-AzureSubscription

Unregister-AzureProvider -ProviderNamespace Microsoft.DevTestLab

# confirm that provider is unregistered
Get-AzureProvider -ProviderNamespace microsoft.devtestlab

Register-AzureProvider -ProviderNamespace Microsoft.DevTestLab

# confirm that provider is at least registering (mine took 1 minute to fully register)
Get-AzureProvider -ProviderNamespace microsoft.devtestlab

 

   Essentially you need to connect in Azure Resource Manager mode and unregister the Microsoft.DevTestLab provider.  Wait until the provider is unregistered and then re-register the provider.  Close all browser sessions logged in to the Azure Portal and re-launch it from the appropriate link.
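If you are scripting this end to end, you can poll the provider state between the unregister and register calls. A sketch, assuming the provider objects returned in AzureResourceManager mode expose a RegistrationState property:

# Wait until the provider is fully unregistered before re-registering.
while ((Get-AzureProvider -ProviderNamespace Microsoft.DevTestLab).RegistrationState -ne 'Unregistered') {
    Start-Sleep -Seconds 10
}
Register-AzureProvider -ProviderNamespace Microsoft.DevTestLab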

 

Conclusion

   Hopefully very few people ran into this issue, as it appears to be caused by the timing of when you registered for the Azure DevTest Labs preview.  Thanks to “runninggeek” for pointing me in the right direction to resolve this.  I provisioned an instance of DevTest Labs this afternoon and am starting to pore through the documentation and the initial set of offerings.

 

      -Frog Out

Member not found jsorter error


Originally posted on: http://geekswithblogs.net/BenAdler/archive/2015/10/13/member-not-found-jsorter-error.aspx

If you are using jsorter with compatibility mode enabled, you may receive a "member not found" error. I'm not entirely sure what causes this error, but with some poking around online I was able to fix it by changing the jquery.min.js code. For reference, I am using jQuery version 1.11.3.

There's a piece of code:

mb = { set: function(a, b, c) { var d = a.getAttributeNode(c); return d || a.setAttributeNode(d = a.ownerDocument.createAttribute(c)), d.value = b += "", "value" === c || b === a.getAttribute(c) ? b : void 0 } },

That needs to be replaced with:

mb = { set: function( elem, value, name ) {
    // BUG related to IE10 compatibility mode: not possible to set aria-* parameters.
    // The following problems were in Asynja: qtip didn't work, couldn't save patients
    // by clicking the save button, and probably more.
    if ( name.substring( 0, 4 ) == 'aria' ) {
        return;
    }

    // Set the existing or create a new attribute node
    var ret = elem.getAttributeNode( name );
    if ( !ret ) {
        ret = document.createAttribute( name );
        elem.setAttributeNode( ret );
    }
    return ( ret.nodeValue = value + "" );
} },


Can Sketch Noting Help You Keep Up With Technology?


Originally posted on: http://geekswithblogs.net/tmurphy/archive/2015/10/14/can-sketch-noting-help-you-keep-up-with-technology.aspx


I saw a post from someone who had created a sketchnote of one of Scott Hanselman’s talks.  It was the first time I had seen this as a defined process for note taking.  As I get older and technology moves faster I am finding that it is getting harder to keep up and remember everything I need to learn.  This approach looks like it could be part of the solution.  This post documents my findings as I start to use sketchnoting.

One of the things I noticed as I learned more about sketchnotes is that I have seen this approach used before in industry books.  Specifically the “Head First” series had the look and feel of a sketchnote.  It seems the authority on sketchnotes is Mike Rohde.  I took the time to go through his workbook and videos.  I wish I had started with his handbook.

Not being an artist by any stretch of the imagination I wasn’t sure how difficult learning this skill would be.  I’m always up for a challenge though and I am finding it fun to exercise my drawing skills.  One thing that I have found that makes it easier is to Bing other people’s drawings to get ideas.

As has been mentioned in a number of articles I perused it is difficult to take notes on a tablet.  Most of this has to do with the touch control available in most of the current generation of devices.  Of course your mileage will vary.  I have tried drawing with both my Surface Pro and HP Spectre 360 with varying success.  I think newer generations like the Surface Pro 4 and Surface Book will do much better.

I took a look at Moleskines but I like having a bigger drawing surface.  I am finding that an 8 1/2 x 11 graph-paper spiral notebook is my sweet spot.  It makes it easier to get your fonts right and keep things aligned.  Using thinner pens for your fonts can also make things look better.  The biggest problem with some of these thin pens is that they tend to bleed ink, making it harder to keep clean lines, but those are the trade-offs.

Overall I am enjoying the process and it is helping me remember more of what I am working on, even if that is mostly because I am writing more.  It takes more time, but the relaxation factor is also a benefit.  It also gives you an easy way to review what you have been learning, which reinforces topics, so I would say it will indeed help you keep up with technology if practiced regularly.  I would suggest that anyone give this a try and see if it fits your style.  Have fun.

Resx strings can be technical debt


Originally posted on: http://geekswithblogs.net/Aligned/archive/2015/10/14/resx-strings-can-be-technical-debt.aspx

I’m working on a project that requires translation using .NET’s Resx capabilities. This project is two years old and counting, and the duplication and disorganization of our Resx strings are very apparent. This is a form of technical debt, and we realized that it is time to pay some of it off before we’re in too deep. I just spent the morning replacing all of our required-field validation messages: 28 different flavors of “X must be selected”, “Y cannot be empty”, “You need to set an email address.”, “Z must be chosen”. These messages are shown directly under the input field in the form, so context already gives them meaning.

All of these strings have been translated to several languages already. Each time a new language is added we have to pay someone to translate. This is real money that has to be spent. It also complicates the validation code and HTML bindings.

The solution for us was to identify these strings, put them in an Excel spreadsheet, then have a developer (me) use the immensely helpful ResX Manager extension to remove them one by one and update the code to replace them with one string in our CommonStrings.resx: requiredFieldMessage = “This is a required field.”

We lose some specificity, but gain a lot.

Lesson Learned

Remember to think about the debt you are incurring on all aspects of your project and have a plan to pay it off as it occurs. We could have adopted naming standards for our Resx keys earlier, avoided this wasteful duplication and re-translation, and simplified our code and process.

Now I’m off to simplify all of our loading messages.

Generate custom build numbers in TFS Build vNext


Originally posted on: http://geekswithblogs.net/jakob/archive/2015/10/15/generate-custom-build-numbers-in-tfs-build-vnext.aspx

By now, many of you should have had the chance to at least play with the new build system that was released in TFS 2015 and Visual Studio Online. Here is an introductory post I wrote about it when it entered public preview back in January.

Doing the basic stuff is very easy using the new build system, especially if you compare it with the old one, which is now referred to as XAML builds. Creating and customizing build definitions is just a matter of adding the tasks that you want to use and configure them properly, everything is done using the web interface that is part of the TFS Web Access.

Build Number Format

There are (of course) still some things that are not completely obvious how to do. One of these things is how to generate a custom build number for a build. Every build definition has a build number format field where you can use some macros to dictate what the resulting build number should look like.


The build number format can contain a mix of text and macros. In the above example I have used some of the date macros to generate a build number that uses today's date plus an increment at the end.

Generating a custom build number

Sometimes though you will have the requirement to generate a completely custom build number, based on some external criteria that is not available using these macros.

This can be done, but as I mentioned before, it is not obvious! TFS Build contains a set of logging commands that can be used to generate output from a task /typically a script) that is generated in a way so that TFS Build will interpret this as a command and perform the corresponding action. Let’s look at some examples:

##vso[task.logissue type=error;sourcepath=someproject/controller.cs;linenumber=165;columnumber=14;code=150;]some error text here

This logging command can be used to log an error or a warning that will be added to the timeline of the current task. To generate this command, you can for example use the following PowerShell script:

Write-Verbose –Verbose “##vso[task.logissue type=error;sourcepath=someproject/controller.cs;linenumber=165;columnumber=14;code=150;]some error text here”

As you can see, there is a special format that is used for these commands:  ##vso[command parameters]text. This format allows the build agent to pick up this command and process it.

Now, to generate a build number, we can use the task.setvariable command and set the build number, like so:

##vso[task.setvariable variable=build.buildnumber;]1.2.3.4

 

This will change the build number of the current build to 1.2.3.4. Of course, you would typically generate this value from some other source combined with some logic to end up with a unique build number.
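For example, here is a hypothetical PowerShell task that derives the build number from the date plus the changeset being built. BUILD_SOURCEVERSION is a predefined build variable; the version scheme itself is just an illustration:

# Compose a build number and hand it to the agent via the logging command.
$changeset = $env:BUILD_SOURCEVERSION
$buildNumber = "1.2.$(Get-Date -Format 'yyMM').$changeset"
Write-Verbose -Verbose "##vso[task.setvariable variable=build.buildnumber;]$buildNumber"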


 

You can find the full list of logging commands at https://github.com/Microsoft/vso-agent-tasks/blob/master/docs/authoring/commands.md



Coexistence between Exchange forests (without trusts…) -- Part 1: Conceptual


Originally posted on: http://geekswithblogs.net/marcde/archive/2015/10/19/coexistence-between-exchange-forests-without-trustshellip-----part-1.aspx

Imagine a scenario where you acquire a company in a different country and they don’t want to be absorbed into your IT environment (because they don’t like it, have regulatory requirements that can’t be met, or are just trying to be difficult), but you do need some fashion of coexistence between the two Exchange organizations… After all, you’re part of the same company now and people should be able to find them in the GAL and do free/busy lookups!

This is the scenario I recently got presented. So I went to work in my lab and documented everything…

Conceptual


Using the GALSync component from MIM 2016 (licensed through Windows Server starting April 1st, 2015) we can synchronize the user objects from each domain into contact objects and have them added to each Exchange global address list.

Scoped send connectors allow the two mail environments to send email directly to each other, and setting up internal relay domains allows email to stay internal rather than going out over the internet. Group-based delivery for the ‘toasterlabs.org’ domain will take care of the inbound internet mail flow.

 

I’m going to split this up into a series and gradually post the parts, so as to not overwhelm you (and to keep you coming back for more). In addition it gives me time to prepare for my next subject(s)!

Coexistence between Exchange forests (without trusts…)  -- Part 1: Conceptual
Coexistence between Exchange forests (without trusts…)  -- Part 2: DNS Forwarders
Coexistence between Exchange forests (without trusts…)  -- Part 3: Preparing the UK Exchange 2007 environment
Coexistence between Exchange forests (without trusts…)  -- Part 4: Preparing the US Exchange 2010 environment
Coexistence between Exchange forests (without trusts…)  -- Part 5: Preparing the GALSync Server
Coexistence between Exchange forests (without trusts…)  -- Part 6: Installing the MIM 2016 Synchronization Service (GALSync)
Coexistence between Exchange forests (without trusts…)  -- Part 7: Creating Synchronization Agents
Coexistence between Exchange forests (without trusts…)  -- Part 8: Enabling Provisioning
Coexistence between Exchange forests (without trusts…)  -- Part 9: Synchronization!
Coexistence between Exchange forests (without trusts…)  -- Part 10: Configuring Free/Busy
Coexistence between Exchange forests (without trusts…)  -- Part 11: References

Windows 10 – Looking for updated HP drivers


Originally posted on: http://geekswithblogs.net/Plumbersmate/archive/2015/10/19/windows-10-ndash-looking-for-updated-hp-drivers.aspx

Decided to check that my HP Compaq Elite 8300 at work had all the latest drivers and updates on it for Windows 10.

The network card NDIS driver has now become unreliable with the latest Insider Preview (10565) and Hyper-V just wouldn’t play nicely with networking.

So I downloaded and installed the HP Support Assistant:


 

Clicking “Check for updates and messages” didn’t work even though I can still happily browse the Internet for content:


 

Maybe this is a side-effect of the current networking/Hyper-V problem so I removed the Hyper-V network adapter – bingo, the Support Assistant can now find the Internet.


 

Great. “No updates available”.

Weird – the only way to capture text of the Specifications is either with a screenshot or value-by-value through the tedious “Click to copy” method. No “save as” or “print” option. What idiot would design it that way?


 

Wait – the “About” for HP Support Assistant has a “Check for latest version”.

Even though I’ve just downloaded the Assistant, I’ve now been able to upgrade from 8.0.29.6 to 8.1.40.3.

Ah, NOW we’re in business…


 

Fingers crossed my current network problems disappear.

Incoherent versioning


Originally posted on: http://geekswithblogs.net/Plumbersmate/archive/2015/10/19/incoherent-versioning.aspx

 

To check my PC is up to date on Windows 10 display drivers, I installed the “Intel Driver Update Utility”. After a quick scan, I was surprised to see I was apparently 5 major versions behind current!


So the 10.x software is dated quite recently - August 17th, 2015.


The 15.x software, though, is actually dated a few weeks earlier - 29th July 2015.


That makes a lot of sense. Both from Intel. Both drivers for “Intel HD Graphics”.

And once installed? I’ve downgraded to 10.18.10.4252 from 10.18.10.4276.


That was a waste of time.

Surprise, surprise, Windows Update has “Intel Corporation - Graphics Adapter WDDM1.0, Graphics Adapter WDDM1.1, Graphics Adapter WDDM1.2, Graphics Adapter WDDM1.3 - Intel(R) HD Graphics” waiting to install.

Debugging - Remote JVM


Originally posted on: http://geekswithblogs.net/rahul/archive/2015/10/19/debugging---remote-jvm.aspx

Most Java IDEs, like Eclipse or IntelliJ, allow configuring the application through a debug configuration to attach to a remote JVM and debug the code libraries.

Let’s walk through this configuration and understand how this debugging works. Coming from an MS Visual Studio background, I always missed the easy way of attaching to your code by launching Debug -> “Attach to process” and selecting the target process which is actually executing the library.

Under Java this needs to be done in two steps. First you launch the application (the debuggee) with some debug options that allow a client to connect to a socket on a given port. Next you launch the code library (the debugger) under a Java debugger which attaches on the same port.

The debuggee is the process being debugged; the back-end of the debugger runs inside this process. The back-end communicates with the debuggee VM using the Java Virtual Machine Debug Interface (JVM DI), and it communicates with the front-end (debugger) over a communication channel using the Java Debug Wire Protocol (JDWP). This communication channel is the link between the front-end and back-end of the debugger and it consists of two mechanisms – Connector and Transport.

The Java Platform Debugger Architecture (JPDA) defines three types of connector:

1. Listening Connector

2. Attaching Connector

3. Launching Connector

A Java debugger client (front-end) can be launched in “listen” or “attach” mode depending on whether the debugger client is started before or after the debuggee application. Alternatively it can also be configured to launch the debuggee process when the debugger is started. The front-end implements the Java Debug Interface (JDI), which is responsible for communicating all actions taken in the GUI to the back-end, and information from the back-end to the GUI.

In establishing a connection between a debugger application and target VM, one side acts as a server and listens for a connection. At some later time, the other side attaches to the listener and establishes a connection. The connections allow the debugger application or the target VM to act as a server. The communications among processes can be running on one machine or different machines.

The most common way to start a debugger is to attach it on the port used while launching the debuggee. Under this option the target VM (debuggee) will be the host for debugging and must be started first. Use the following options while invoking the debuggee, and run the debugger in attach mode specifying the server IP and port of the remote debuggee VM:

-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

 

And if you want to host the debugging in debugger then use listening mode and launch the debuggee (after starting debugger) with following command specifying the debugger server IP and port:

-Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=localhost:5005,suspend=y

 

Under JAVA 1.5 or later you may use -agentlib instead of “-Xdebug -Xrunjdwp”, for example:

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005

 

And if you want to use shared memory as transport:

-agentlib:jdwp=transport=dt_shmem,server=y,suspend=n,address=javadebug

Introduction to Hadoop


Originally posted on: http://geekswithblogs.net/rahul/archive/2015/10/19/introduction-to-hadoop.aspx

Apache Hadoop is a framework which supports distributed processing and distributed storage of very large data sets on clusters of commodity computers.

The distributed processing is achieved through MapReduce and distributed storage is achieved through HDFS (Hadoop Distributed File System). The Hadoop framework is created especially for the clusters of computers so it is very much aware of the nodes, its network configuration and handles node/storage/network failures. YARN (NextGen MapReduce) further improves this framework by splitting the two major functionality of Job Management and Resource Management in two separate daemons.

MapReduce is a new abstraction that allows users to express the simple computations that process large amount of raw data, but hides the messy details of managing

a. Parallelization

b. Fault Tolerance

c. Data distribution

d. Load balancing

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model.

This is very well explained in the white paper released by Google which inspired the creation of Hadoop:

http://static.googleusercontent.com/media/research.google.com/en/us/archive/mapreduce-osdi04.pdf

Hadoop's approach:

1. Take advantage of data locality. Splits data in blocks and stores them on different nodes (with replication to handle faults). The processing logic is then sent over to nodes to work on their local copy of data. It works well because processing logic can be expressed in much fewer bytes compared to actual data and so instead of moving data through network it transfers the processing logic itself. Moving Computation is Cheaper than Moving Data.

2. Designed to have Fault tolerance through fault recovery. Instead of designing a system to be fault safe it is designed to recover from faults (which only impacts the processing when there is a real fault). It is handled through replication and redundancy.

3. Split very large data sets in small manageable blocks and run the logic in parallel on multiple coordinated nodes. By controlling the input specification and output specification it makes it easy to stitch the processing logic of mapper and reducer together.

4. Take advantage of network proximity and minimize the usage of network bandwidth by replicated data within a data center and within a rack.

5. Let the nodes work independently and report status. Only coordinate the processing without introducing a bottleneck.

6. Simplify the logic expression and avoid requirements of complex logic to minimize iterations for performance optimization.

7. Easily scale horizontally as data size increases.

8. Handle very large data sets by pooling in the storage from multiple nodes. It can even handle data sets which cannot fit on one node.

9. Read chunks of data in parallel from multiple nodes and provide a very high aggregate bandwidth. The emphasis is on high throughput of data access rather than low latency of data access.

10. Use redundant execution to reduce the impact of slow machines, and to handle machine failures and data loss. Speculative Execution in Hadoop to handle stragglers.

The base Apache Hadoop framework is composed of the following modules:

1. Hadoop Common – contains libraries and utilities needed by other Hadoop modules

2. Hadoop Distributed File System (HDFS) – a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster

3. Hadoop YARN – a resource-management platform responsible for managing computing resources in clusters and using them for scheduling of users' applications

4. Hadoop MapReduce – a programming model for large-scale distributed data processing.

Hadoop installation for Development


Originally posted on: http://geekswithblogs.net/rahul/archive/2015/10/19/hadoop-installation-for-development.aspx

The best way to learn about Hadoop is getting your hands dirty with real Hadoop programs and their execution. In order to do so we first need a Hadoop installation in local development box.

Steps to install Hadoop:

1. Download and install the Oracle Virtual Box

https://www.virtualbox.org/wiki/Downloads

2. Download and install Hortonworks Sandbox virtual appliance for VirtualBox

http://hortonworks.com/products/hortonworks-sandbox/#install

*Tip: If you get any error running Oracle Virtual Box, please check the BIOS settings to enable virtualization on the machine. And ensure you downloaded the correct installation matching your system configuration (32 bit vs 64 bit).

You can access your Hadoop installation using the browser-based interface (Hue) at http://localhost:8888/

You can also SSH to the Linux virtual box using credentials root/hadoop.

You can SFTP to the Linux box using its IP and port 2222.

Login into the SSH terminal and run command: hadoop version

This will print the version information of Hadoop installation.

 

[root@sandbox ~]# hadoop version

Hadoop 2.2.0.2.0.6.0-76

Subversion git@github.com:hortonworks/hadoop.git -r 8656b1cfad13b03b29e98cad042626205e7a1c86

Compiled by jenkins on 2013-10-18T00:19Z

Compiled with protoc 2.5.0

From source with checksum d23ee1d271c6ac5bd27de664146be2

This command was run using /usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-76.jar

[root@sandbox ~]#


Writing your first MapReduce program


Originally posted on: http://geekswithblogs.net/rahul/archive/2015/10/19/writing-your-first-mapreduce-program.aspx

Before we delve into the IDE and start writing code, let's understand a bit more about MapReduce.

The MapReduce computation takes a set of input key/value pairs, and produces a set of output key/value pairs.

The user of the MapReduce library expresses the computation as two functions: Map and Reduce. Map takes an input pair and produces a set of intermediate key/value pairs. The MapReduce library groups together all intermediate values associated with the same intermediate key (k2) and passes them to the Reduce function. The Reduce function accepts an intermediate key (k2) and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation.

The Map

A map transform is provided to transform an input data row of key and value to an output key/value:

  • map(k1,v1) -> list<k2,v2>

That is, for an input it returns a list containing zero or more (k,v) pairs:

1. The output can be a different key from the input

2. The output can have multiple entries with the same key

The Reduce

A reduce transform is provided to take all values for a specific key, and generate a new list of the reduced output.

  • reduce(k2, list<v2>) -> list<v3>

The MapReduce Engine

The key aspect of the MapReduce algorithm is that if every Map and Reduce is independent of all other ongoing Maps and Reduces, then the operation can be run in parallel on different keys and lists of data.

Apache Hadoop is one such MapReduce engine.

Refer http://wiki.apache.org/hadoop/MapReduce for more details.

5-step parallel and distributed computation in MapReduce:

  1. Prepare the Map() input – the "MapReduce system" designates Map processors, assigns the input key value K1 that each processor would work on, and provides that processor with all the input data associated with that key value.
  2. Run the user-provided Map() code – Map() is run exactly once for each K1 key value, generating output organized by key values K2.
  3. "Shuffle" the Map output to the Reduce processors – the MapReduce system designates Reduce processors, assigns the K2 key value each processor should work on, and provides that processor with all the Map-generated data associated with that key value.
  4. Run the user-provided Reduce() code – Reduce() is run exactly once for each K2 key value produced by the Map step.
  5. Produce the final output – the MapReduce system collects all the Reduce output, and sorts it by K2 to produce the final outcome.

Refer http://en.wikipedia.org/wiki/MapReduce for more details.

For learning the first program in Hadoop we will follow the example from the book “Hadoop - The definitive guide”. The sample application finds the max temperature for each year by analyzing the weather data provided by National Climatic Data Center (NCDC).

Now download the Eclipse https://eclipse.org/downloads/ and configure it for JAVA development.

Tip: It is important to use the same version of the JDK as is available on your Hadoop system. Forgetting this may cause an “Unsupported major.minor version 52.0” error when you try to run your program. You can find the Java version running on the Hadoop system by running “java -version”.

Download the sample data from https://github.com/tomwhite/hadoop-book/tree/master/input/ncdc/all

Download Hadoop distribution from http://hadoop.apache.org/releases.html and setup your JAVA_HOME and HADOOP_HOME environment variables.

Tip: It is important to use the same version of Hadoop as is available on your Hadoop system. Forgetting this may cause an “Unsupported major.minor version 52.0” error when you try to run your program. You can find the Hadoop version running on the Hadoop system by running “hadoop version”.

SFTP the files to a directory in your virtual linux sandbox, and run gunzip.

[root@sandbox rahul]# gunzip *.gz

[root@sandbox rahul]# ll

total 1740

-rw-r--r-- 1 root root 888190 Jun 3 12:37 1901

-rw-r--r-- 1 root root 888978 Jun 3 12:37 1902

Next create a Java project and add the external Hadoop library JAR files to its build path. The Hadoop JAR files are available under HADOOP_HOME/share/hadoop. Search for all *.jar files and add them to the build path.

In general you will require JARs from common, hdfs, and mapreduce.

Now add three class files MaxTemperature.java, MaxTemperatureMapper.java, MaxTemperatureReducer.java. Download the source code from the attached zip file.

MaxTemperature.java is the MapReduce job; it uses the mapper and reducer functions provided in the corresponding java files and schedules the map/reduce tasks to obtain the final output.

Export the project from Eclipse using the “Runnable JAR” option and SFTP it to the Hadoop system (give it a name, say MaxTemperature.jar). You can now run the jar using:

[root@sandbox rahul]# hadoop jar MaxTemperature.jar

Since we did not provide the input and output paths, it will print the usage help “Usage: MaxTemperature <input path> <output path>”. This confirms everything so far is good.

Tip: In case you get any error, create a simple java hello world program and try to run it with Java / Hadoop.

[root@sandbox rahul]# java -jar HelloWorld.jar

Hello World!

[root@sandbox rahul]# hadoop jar HelloWorld.jar

Hello World!

Copy your sample data to HDFS:

[root@sandbox rahul]# hadoop fs -mkdir /mapred/temp

[root@sandbox rahul]# hadoop fs -copyFromLocal 1901 /mapred/temp/1901

[root@sandbox rahul]# hadoop fs -copyFromLocal 1902 /mapred/temp/1902

Execute the Hadoop job (provide input and output files):

[root@sandbox rahul]# hadoop jar MaxTemperature.jar /mapred/temp/1901 output

And you have now successfully executed a Hadoop job! The result will be available in the user's default directory. In my case it is /user/root/output and can be viewed with the following command:

[root@sandbox rahul]# hadoop fs -cat output/part-r-00000

1901 317


Coexistence between Exchange forests (without trusts…) -- Part 2: DNS Forwarders


Originally posted on: http://geekswithblogs.net/marcde/archive/2015/10/19/coexistence-between-exchange-forests-without-trustshellip-----part-2.aspx

Creating conditional forwarders

Step 1: Open DNS manager

Step 2: Select Conditional forwarders

Step 3: Right-click “Conditional Forwarders” and select “New Conditional Forwarder…”

Step 4: Enter the DNS name the forwarder is being created for

 

Step 5: Enter the IP address(es) of the DNS server(s) authoritative for the domain.

Step 6: Select “Store this conditional forwarder in Active Directory, and replicate it as follows:”

 

Step 7: Click OK

Step 8: Repeat in each forest you want to replicate to/from
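If you prefer PowerShell over the DNS Manager GUI, the same forwarder can be created with the DnsServer module (Windows Server 2012 or later). A sketch, where the zone name and master server IP are placeholders for your environment:

# Create an AD-integrated conditional forwarder, replicated forest-wide.
Add-DnsServerConditionalForwarderZone -Name 'toasterlabs.org' -MasterServers 10.1.0.10 -ReplicationScope 'Forest'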

Coexistence between Exchange forests (without trusts…)  -- Part 1: Conceptual

Coexistence between Exchange forests (without trusts…)  -- Part 3: Preparing the UK Exchange 2007 environment

Debugging PCI bus with Windows CE


Originally posted on: http://geekswithblogs.net/WernerWillemsens/archive/2015/10/21/debugging-pci-bus-with-windows-ce.aspx

This time I will write about something I don't understand :-( Or at least not completely.

Some time ago I was debugging a new Intel hardware platform (Adlink ETX-BT, Celeron J1900) and I experienced unexpected hangups during boot of the DEBUG version of my WINCE800 image. The RELEASE version never gave problems and started properly. After narrowing down the problem, I came across this piece of code inside PCIBUS.DLL (pcicfg.c line ±800)

...
// Set the bus numbers & secondary latency
// Need to set the subordinate bus as max for now, then write
// actual number after found all downstream busses
(*pSubordinateBus)++;
Info.SecondaryLatency = (Info.SecondaryLatency > 0) ? Info.SecondaryLatency : pBusInfo->SecondaryLatency; // use global bus value if not defined for bridge
BusReg = PCIConfig_Read(Bus, Device, Function, PCIBRIDGE_BUS_NUMBER);
((PBRIDGE_BUS)(&BusReg))->PrimaryBusNumber = (BYTE)Bus;
((PBRIDGE_BUS)(&BusReg))->SecondaryBusNumber = (BYTE)(*pSubordinateBus);
((PBRIDGE_BUS)(&BusReg))->SubordinateBusNumber = 0xFF;
if (Info.SecondaryLatency)
    ((PBRIDGE_BUS)(&BusReg))->SecondaryLatencyTimer = (BYTE)Info.SecondaryLatency;
PCIConfig_Write(Bus, Device, Function, PCIBRIDGE_BUS_NUMBER, BusReg);
...

While enumerating the PCI bridges, stepping over the highlighted source code line (the final PCIConfig_Write call above) on the 2nd PCI bridge instance (Bus 00, Dev 1C, Fun 01, 8086 0F4A), the device "hangs". If you, however, skip this line (i.e. do NOT execute it; use Set Next Statement to jump to the next source code line), there is no problem and you can continue debugging.

Bridge device      Before re-enumeration              After re-enumeration
Bus  Dev  Fun      Order  Sub bus  Sec bus  Pri bus   Order  Sub bus  Sec bus  Pri bus
00   1C   00       1      1        1        0         2      1        1        0
00   1C   01       3      2        2        0         4      2        2        0
00   1C   02       5      4        3        0         8      4        3        0
03   00   00       6      4        4        3         7      4        4        3
00   1C   03       9      5        5        0         10     5        5        0

What is this code doing?
The BIOS already executed the PCI bridge enumeration and had filled in the SecondaryBusNumber and SubordinateBusNumber. PCIBUS.DLL in Windows CE actually re-executes the bus enumeration during startup (and finds the same enumeration order). While recursively enumerating the PCI busses, it needs to pass through all PCI Configuration accesses to the secondary bus "under investigation". Therefore it needs to write - temporarily - SubordinateBusNumber = 0xFF, allowing all accesses to flow downwards to the secondary bus. So you might expect this is not a problem, since we are just filling in the same numbers. But in this particular case it is.

Bus  Device  Function   VendorId   DeviceId   Description
00 00 00 8086 0F00 Host PCI Bridge
00 02 00 8086 0F31 Display Controller VGA/8514
00 1C 00 8086 0F48 PCI/PCI Bridge
00 1C 01 8086 0F4A PCI/PCI Bridge
00 1C 02 8086 0F4C PCI/PCI Bridge
00 1C 03 8086 0F4E PCI/PCI Bridge
00 1D 00 8086 0F34 USB
00 1F 00 8086 0F1C PCI/ISA Bridge
00 1F 03 8086 0F12 SMB
01 00 00 11AB 6101 IDE controller
02 00 00 11AB 6101 IDE Controller
03 00 00 104C 8240 PCI/PCI Bridge
04 05 00 10EC 8139 Ethernet Controller (Realtek)
04 06 00 xxxx xxxx Other Bridge type
05 00 00 8086 1539 Ethernet Controller (Intel)

I admit that I do not fully understand the problem. My closest guess is that when SubordinateBusNumber == 0xFF is filled in, the PCI configuration accesses to the PCI-based KITL NIC (Bus 04, Dev 05, Fun 00, 10EC 8139) get lost (redirected) through this bridge (Bus 00, Dev 1C, Fun 01, 8086 0F4A). KITL fails, hence the "hang" of the device during debugging. Actually the device is still alive, but the Visual Studio Platform Builder debugger has lost its connection with the NIC. It would also explain why there is no problem on a RELEASE build.
But why aren't the PCI configuration accesses lost when enumerating the first bridge (Bus 00, Dev 1C, Fun 00, 8086 0F48) instance then? And why haven't I seen this problem on other hardware platforms before? And does the KITL NIC driver need to access the PCI configuration registers in running condition? Puzzling...

At least skipping this line of code (on every debug boot, which is annoying) brought back my debugging possibilities on this hardware.

If anyone has a better explanation for this problem, or a solution how to fix it, feel free to let me know...


SharePoint Installation Guide for you


Originally posted on: http://geekswithblogs.net/ferdous/archive/2015/10/22/sharepoint-installation-guide-for-you.aspx

Recently I took a SharePoint administration training, and a SharePoint installation demo was part of it. I am going to share this experience with you here so that it helps with your installation if you are new to SharePoint.

Assume Active Directory is already installed and running in your environment, as Active Directory is required for a Classic-based installation.

Create the following accounts in Active Directory (a PowerShell sketch follows this list):

  • SharePoint Admin: SPAdmin

  • Farm Admin: SPFarm

  • Service Account: SPService (To run all SharePoint Service)
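A minimal PowerShell sketch for creating these accounts with the ActiveDirectory module (the password prompt and default OU are assumptions; adjust for your domain conventions):

# Create the three SharePoint service accounts.
Import-Module ActiveDirectory
$password = Read-Host -AsSecureString 'Account password'
foreach ($name in 'SPAdmin', 'SPFarm', 'SPService') {
    New-ADUser -Name $name -SamAccountName $name -AccountPassword $password -Enabled $true -PasswordNeverExpires $true
}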

Step 0: SQL Server Installation

Step 1: Prerequisite Installation

Step 2: SharePoint Server Installation

  • Once installation of the prerequisites is done, click on Install SharePoint Server.

  • Enter the product key and press Continue

  • Then accept the terms of the license agreement(s) and press Continue.

Step 3: Run SharePoint Configuration Wizard

The next step is to run the SharePoint Configuration Wizard. By clicking on the Close button, the wizard will run automatically. If you want to run it later, you can find it in the Start menu. Find detailed installation steps at

http://geekswithblogs.net/ferdous/archive/2013/07/18/sharepoint-2013-installation-guideline.aspx

 

PowerShell Script to Download SharePoint 2013 Pre-Requisites for Windows Server 2012

https://gallery.technet.microsoft.com/scriptcenter/Script-to-SharePoint-2013-702e07df
