
Stickfight


Missing A Conference

Over the last few weeks my social media stream has been filled with pictures and memories of times gone by at Lotusphere/Connect. These memories have been more than a little painful, because they were all great times: a meeting place for great friends, and a community spirit that I have never encountered in any other technology, not Salesforce, Java, Node, or MongoDB.

This community still exists, even though it has shrunk over the last couple of years, and there is hope that with the recent changes, the hoped-for reinvigoration by IBM/HCL, and the constant work of core community leaders such as Gabriella Davis, it will return and maybe even grow. Having basically opted out of the community for the last year or so through a mixture of client demands and an ever-increasing workload, I am now reminded by these pictures and memories how important such a community is, not just to work and business but to friendship and general sanity.

Long live the yellow bubble!!!


[Image: My first Lotusphere, young, fresh-faced and not fat]

[Image: Presenting for the first time at Lotusphere]

[Image: On the piss with good friends]

[Image: The famous “all bloggers” photo]

Salesforce: Same Code Different Triggers

In Salesforce the same bit of code can be triggered in a lot of different ways, and when it comes to calls to third parties there are different rules for each of those ways.

For example, take this bit of code. In it we are just passing in a contact ID, and it is going to go and talk to a third-party web service: inside “setUpRequest” it’s going to update the third party with the details of the Salesforce Contact, and in return receive some bits and bobs from the third party to update the Salesforce side. Basic syncing between two parties.

public class BlogFramework {

    public static void UpdateContactFromExternalWebService(String contactID) {
        // setUpRequest (sketched below) builds the callout with the Contact's details
        Http h = new Http();
        HttpRequest request = setUpRequest(contactID);
        HttpResponse response = h.send(request);
        // ... the response would then be used to update the Salesforce side
    }

}
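
We will assume a minimal “setUpRequest” along these lines (the endpoint, method, and payload here are stand-ins; the real request would carry whatever the third party actually expects):

private static HttpRequest setUpRequest(String contactID) {
    HttpRequest request = new HttpRequest();
    request.setEndpoint('https://example.com/api/contacts/' + contactID); // hypothetical endpoint
    request.setMethod('POST');
    request.setHeader('Content-Type', 'application/json');
    request.setBody('{"contactId":"' + contactID + '"}'); // hypothetical payload
    return request;
}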


We want this to happen at two different times:

  1. When a user manually updates a contact and then saves it: we want the sync to happen instantly, so the user can see immediately what’s happened and what’s been updated.
  2. On a schedule: the contact might not be updated in Salesforce at all; all the changes might happen in the third party, but the details still have to be kept up to date for reports and views etc.

So this bit of code has to be callable both from a schedule and from a save trigger.

Let’s take the save trigger first. As it stands it won’t work: you will get the error “Callout from triggers are currently not supported.” if you try. Normally you would just pop the “@Future(callout=true)”1 annotation at the top of this function and that would solve it, but as you will see later on we can’t do that here, so instead we’re going to have a little wrapper function that carries the @Future annotation and calls our real function.

@Future(callout=true)
public static void UpdateContactFromExternalWebServiceTrigger(String contactID) {
    BlogFramework.UpdateContactFromExternalWebService(contactID);
}


We can then call that wrapper function from our contact save trigger, and everything will work perfectly.

trigger ContactTriggerAllEvents on Contact (
    before insert,
    before update,
    //before delete,
    after insert,
    after update
    //after delete,
    //after undelete
    )
    {
        // Only fire the sync once the record has actually been saved;
        // otherwise it would run twice per save (before and after),
        // and in before insert the contact has no Id yet
        if (Trigger.isAfter) {
            for(Contact cnt : Trigger.new)
            {
                BlogFramework.UpdateContactFromExternalWebServiceTrigger(cnt.Id);
            }
        }
    }


Next comes calling it from a schedule. If we had put the @Future annotation on the actual function this would fail, because you cannot call a future function from a scheduled batch, but we don’t have that issue now. What you DO have to do is bolt “Database.AllowsCallouts” onto your batch class, as seen below.

global class UpdateFromAzpiral implements Database.Batchable<sObject>, Database.AllowsCallouts{

    // Get all the contacts
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator([SELECT Id FROM Contact]);
    }

    // The execute method is called for each chunk of objects returned from the start method.
    global void execute(Database.BatchableContext BC, List<Contact> scope){
      for(Contact c : scope){
         BlogFramework.UpdateContactFromExternalWebService(c.ID);
      } 
    }

    //The finish method is called once all the batches have been processed.
    global void finish(Database.BatchableContext BC){
    }

}


Now your batch object will be allowed to do callouts.

Putting all these bits together means you can have a single function that calls out to third parties and that can be triggered from either a schedule or an ordinary trigger.
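
For completeness, here is a minimal sketch of how that batch class might actually be put on a schedule (the wrapper class name and the cron string are my own inventions; scheduling is covered properly in the “Scheduled Agents” post below):

global class UpdateFromAzpiralSchedule implements Schedulable {
    global void execute(SchedulableContext sc) {
        // A small batch size keeps each chunk well inside the callout limits
        Database.executeBatch(new UpdateFromAzpiral(), 5);
    }
}

You would then schedule it from the Developer Console with something like System.schedule('Contact sync', '0 0 2 * * ?', new UpdateFromAzpiralSchedule()); to run daily at 2am.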


  1. The “@Future(callout=true)” annotation basically means that the Salesforce code does not stop and wait before doing other things; calls to third parties therefore do not slow down the Salesforce UI. 

Remote Desktop while away

In a few weeks I pop off on my first holiday for two years, and for this trip I really do not want to take my laptop. That might seem odd, as the darn things have been practically glued to me for the last 20 years, but:

  1. Time to learn to take a break
  2. As a member of LDC Via I need to learn when to share the work, and how to hand over.1
  3. Errr… well… there are countries that were once thought to be friendly that have now shown a distressing habit of being hostile at their borders, wanting to take your clients’ private data away and look at it for “reasons”, and I figured this would be a great dry run for having to travel to such a country again.

But I’m not insane, nor can I abandon my clients, sooo I needed to retain access in some way so that I can support as and when it is needed. Some sort of cloud desktop that I can reach from a cheap laptop or tablet seemed like the obvious answer.

Requirements

  1. It must be able to run a form of VMware, as each of my clients has their own separate VM(s) with separate security.
  2. It must support Android (large and small screen) and Linux as clients.
  3. Must not cost a bleeding fortune.

Contenders

Amazon WorkSpaces: This looked a perfect fit to start with, and I use AWS for lots of other services including this blog, but the setup was a right faff, slow and cumbersome, and then I realised that it would not allow VMware or any other virtualisation. Undaunted, I thought I would at least check out the performance, only to discover that despite stating it “just connects from anything”, it meant only large Android screens (not my phone) and not Linux at all... so in the bin it went (shame really).

VMware Remote Console: Wince!! I have to own to a bit of paranoia here myself: direct external access to the clients’ VMs is just too much of a risk, and it stops there. Sorry VMware. I tried to mentally run through the conversation explaining my actions to clients, and none of them went well.

I started to look at Citrix, then got a grip and thought, “OH COME ON”, it’s just one machine. Just get another machine, load your VMs on it, and get a good secure remote client… cue another 30 minutes wasted looking at online hosting and then at a small local server, and more swearing from the office as I realised “JUST USE YOUR BLOODY LAPTOP YOU TWIT, YOU WON’T HAVE IT WITH YOU”…. fsss.

So it just comes down to good remote connection software.

The 3 that stood out were:

LogMeIn: From my point of view easy to discount, as it does not support Linux guests, but it would have been discarded anyway because it has a small-company feel, which again makes it a difficult sell to clients.

TeamViewer: Used by many of my clients and supports a lot of nice security features, but a bit on the pricey side.

VNC Connect: I was attracted to this as I use the free version for my Raspberry Pis. I also like the VNC standard, which has been security-hardened by many a grumpy sysadmin and dev over the years, and the VNC Connect platform provided by RealVNC ticks all the boxes.

I was actually hard pressed to pick between TeamViewer and VNC Connect: on paper they both provided all the features/platforms and security that I could want, but in the end VNC Connect won, partly because of price (it is £42 a year2 vs £384) but mainly because the TeamViewer Android viewer does not support the use of a mouse.3

Security

OK, let’s get the elephant in the room out of the way: how are we handling security?

Well, VNC Connect is nicely paranoid about security, so getting to my laptop now requires two logons, both different 12-digit ones (one to log in to VNC Connect and one to get to my laptop); then each VM is encrypted and requests a login appropriate to the OS used; then of course there is the normal password for each client to connect via VPN/programs etc. etc.

I ran that through the potential questions any of my clients might ask, and it came out OK.

Physical Setup

I have attempted to do this kind of thing before, and have obviously used remote software all my career, and thus have sworn at my fair share of thin clients, phones, and tablets.

Requirements

  1. Lighter than just taking my laptop
  2. Resolution of a decent Screen
  3. Good keyboard
  4. Good mouse
  5. Either not a silly price or very reusable.

I looked at Chromebooks, but frankly all in the price range were poo; Microsoft Surface laptops were too expensive, and I’m not in the iOS ecosystem. So an Android tablet it was, which meant a Nexus or Pixel, as few of the other vendors keep their security patches up to date. Thankfully the Pixel C was on special offer (most likely due to being replaced in the next month or so), which meant I could get a good tablet with a great screen and a very pleasant keyboard; paired with the new Logitech trackball I had something that was very usable, thank you.



Testing

So I did two basic tests: performance and usability.

Performance: I connected my laptop to a VPN in Japan, then tethered the tablet to my phone, connected via the remote, and sat down to work. It was totally usable: there was that slight mouse lag you get with any remoting software, but no more than I get when I VPN into any client’s network, and that was going to be the real test, using a VM via another VPN while remoting into the host PC. Here I have to say I cheated like a devil, as my laptop is in the comms room of the office, where it has access to the AWESOME leased line we use (and is physically secure), so in fact it was actually faster and more responsive than normal, which was a more than pleasant surprise.

Usability: Not bad, not perfect, but not bad. It worked as well as any remote program, with a couple of extra quirks. It does not re-size the client desktop, as it is a genuine KVM rather than creating a new session, which is both good and bad, and easy to work around. The other quirk I’m still working on is that the top and bottom bars of the RealVNC client trigger very easily and don’t always retract cleanly without an extra click; I would like to be able to allocate a special gesture to stop the client working in modal form, the same as the VMware client, perhaps a three-finger swipe from the top or something like that.

Conclusion

OK, I have been doing dry runs of leaving my laptop in the office, just taking the tablet and mouse home at night and working on that, and I have to say I am now comfortable going to Japan without the laptop. This setup works…. wish me luck.. :p


  1. I get told off about this about once a month. 

  2. They do do a cheaper version but I wanted the higher level encryption. 

  3. But I do have to give the TeamViewer client credit for handling screen rendering better than VNC; it would have been nice to get the best of both worlds, and I hope VNC improves on that front. 

MWLug 2016 Round-Up

So I’m back from my first MWLUG, jetlagged to hell and shattered, trying to answer the question that LDC Via always asks after a paid conference: “was it worth it?”

For me the big revelation was the PSC presentation on migration strategy; not for the strategic partnership announcement with LDC Via, which I of course knew about as I was there as LDC Via’s representative, but for the “this is the truth of the world we live in, it is time to accept it” statement. That truth has been an undercurrent of our yellow world for so long, but has lacked someone articulate enough to say it in a non-IBM-bashing way. Also, Mr Head managed in one slide to sum up LDC Via’s position in the application structure better than the four techs who wrote it had managed in two years….

The conference itself was excellent, with some stunning content, and very enjoyable from both a learning and a social angle. I could have done without the heat of Austin, but the city itself is amazing and I can see why the residents are so proud of it.

The session with Gab went well, with a good attendance and no hecklers (security people can be weird). My individual session was the last of the day and, thanks to the conference running a bit on the late side, was not well attended (I went and checked whether attendance was as bad in the other sessions, and it was), but EVERYBODY in the session asked questions, and multiple people came up and thanked me again later .. Weeeeeee

LDC Via was well received, and people are starting to see what it can do, and that it is not a threat to IBM or to their jobs and ecosystem, more an evolution of storage. But I really should have brought more marketing stuff to give away :(

I also attended my first Penumbra meeting and ended up doing hardly any of the full day’s work I had brought with me (I hate meetings), because the conversation was so interesting and engaging. I now understand why people join.

And thus the answer to the question at the beginning of the post is: “Yes, it was very much worth it”



To Find The Perfect Office

I find with offices that I am more than a little picky: you think that all you want are a few simple basics, but it turns out that none of the shared or small office providers want to provide what you want in the way you want it.

I tried most of the leading communal office companies as a truly private office would be too expensive, and I would go stir-crazy, but every time there would be something fundamentally wrong with them. Faults ranged from one office advertising itself as a “dynamic shared environment” which turned out to be a lobby coffee shop >:(1 through to places that were so keen to get you in that it felt like you had hired yet another project manager to be on your case. This happened four or five times, but I was determined to find a good work base2, and my perfect office turned out to be a place called Purple Patch.

Within a day of trailing into the building, fresh from an outrage at a previous office provider, I determined to settle there like an ugly toad under a rock. If you want the low-down I recommend you go and look at their web site.

Let’s walk through my must-haves:

24-hour access: This is a big thing for me, in fact THE big thing. Most shared offices close at 6pm, and you’re waiting at the door at 8am to be let in. However Purple Patch is really 24 hour access. They give you a bunch of keys and security fobs after checking your background, and then you can come in at any time, day or night. Perfect for those late night deadlines.

Impresses Clients: The place must look and feel good: a lot of shared offices have a slightly tired feel about them, a bit like airport lounges. At Purple Patch everything is bright and clean, and I thought at first it was because the place was new—but no, it’s been there for 15 years and simply has regular revamps. This means that you can bring clients on-site with a sense of pride, and its excellent location means that there are tons of great places to eat and drink within idle wandering distance.

Hard Lines: I know that everyone just lives on wifi, but not me. Wifi is fine and everything but I want a good speed, I want to go like the clappers, and so each desk has a network port in it that gives you 100Meg down, and the same up. This means that only individual clients’ VPN speeds slow me down.

Layout: I like open-plan, but not OPEN PLAN. Again, Purple Patch scores: the building itself is like a cheerful Gormenghast (or as Ben Poole said, “an eclectic second hand book shop”). Desks are grouped into bays and natural nooks, you don’t feel isolated, but similarly you are not forced to endure every word of a nearby loudmouth’s day.

Humanity: My office provider must be human. One of the Purple Patch clients has a small dog she sometimes brings in, and no-one howls (sorry) about rules. Parents bring in kids and I have never been disturbed by them (the layout helps with this, as does the chill-out area with table-top football that gives kids something to do). No-one brings in jaguars, and what can and can’t be done is decided on a case-by-case basis to keep everyone happy, rather than by following a soulless set of rules.

Now the little things:

Nice toilet roll holders: Don’t look at me like that. How many times have you been at a client’s, or any kind of office, and had to deal with a toilet roll holder that looks straight out of Parkhurst prison? Something that does not allow more than two sheets to be pulled off before it jams and rips, and you end up having a little tantrum, reaching in and ripping the offending roll out, which is less than professional. It turns out that Purple Patch agrees, and has proper toilet roll holders that hold multiple rolls and don’t damn well jam.

Proper coffee in proper places: Purple Patch have different coffee in different places in the building (yes, yes, tea as well!). If you want to wait at a machine while it goes “pluck pluck pluck” then fine, there’s one of those. Personally I like the old-style filter coffee pots, and they are always kept topped up to ensure optimal coding hyperactivity. I have watched Ben Poole consume his entire body weight in coffee within approximately ten minutes of arriving at the office.

… and that’s it. Really I am just relieved to have found an office I like and can do business in :-)

(BTW: I’m not getting paid or anything for this review!)



This is where I skulk. Your desk is static and you get a locking cupboard (there are personal lockers as well)



Lots of little drop-in meeting nooks



Coffee, COFFEE!!! (kept topped-up by the lovely lovely staff)



You would not believe that the meeting rooms are actually cheaper than horrible scabby chain franchise ones



A nice shower at an office, there’s an idea



Even LDC Via can enjoy a meeting there (and maybe the pub afterwards)


  1. Meaning I couldn’t leave my laptop unattended, and therefore had to take it with me every time I went to the toilet. 

  2. I could not really use home as my work place as I need to meet with clients, and with clients of companies I am sub-contracting for. LDC Via work together a lot as well, and frankly I start to lose productivity if I work from home more than around one day every two weeks. 

A Little Thing Done Right

Last week the ‘swag’ from being an IBM Champion arrived, and to my utter surprise it was just perfect. Yes, I had picked it from a catalogue and knew at least one of the items was from a brand I knew, but that did not detract from the fact that it represents something a bit deeper to me than just a giveaway to keep some evangelists sweet.

Branded stuff like this is really supposed to be used where clients can see it (on site ideally), but recent IBM marketing stuff has been of very poor quality: just somewhere to slap the logo on and hope for the best. The best example of this is the backpacks given away at recent IBM Connect events; they were not even worth taking home, whereas the 2005 and 2003 editions are still in use and treated as a fine vintage. Whoever looked at the bags and decided to skip them for this year’s event was a wise person. Anyway, the swag that just arrived represents, in my opinion, just what IBM is aiming at with their Champions:

  • High quality outsourcing: IBM obviously did not do it themselves, but the picking and delivery was, for me, a simple and flawless exercise.

  • Best of breed: The backpack is Wenger, the notebooks are Moleskine, the t-shirt is Nike. Good competent brands, not too flash, but not some no-name knock-off that falls to bits.

  • To be seen in public: I am already using my stuff on site, and with the same pride I would a MongoDB t-shirt or an LDC Via power pack.

I expect I am reading too much into this and it’s simply the result of a single individual doing their job very well, but even if that’s the case it’s a good example of a rejuvenating IBM.

SalesForce for Domino Dogs 3: Web Query Save Agents

“WebQuerySave” / “PostOpen” and all their siblings have been a bastion of Domino and Notes development since time out of mind, and indeed they exist in a near-identical form in Salesforce, just called Triggers.

Just like Notes/Domino has different events that let code ‘do stuff’ to records, e.g. “WebQueryOpen”, “OnLoad”, “WebQuerySave”, etc., Salesforce has the same sort of thing; in its case they are broken down into two parts: timings and events.

Timings: Before and After

Before: The event has been started but the record has not been saved; this maps basically to the “Query” events in Domino.

If you want to calculate fields and change values in the record you are saving, this is the time to do it. You don’t have to tell it to save or commit the record as you normally would; a save runs after your code has finished.

After: The record has been saved and all field values have been calculated; then the After event is run.

If you want to update other objects on the basis of this record being created/saved, do it here. You can’t edit the record you are saving, but lots of useful bits such as the record ID and who saved it are available in the After event.1

Events: Insert, Update, Delete and Undelete

These are exactly what they say they are: Insert is like a new document creation, Update is editing an existing document, etc. etc.

This then gives us a total set of different event types of:

  • before insert
  • before update
  • before delete
  • after insert
  • after update
  • after delete
  • after undelete2

Now you can have a separate trigger for each of these events, but I have found that this bites you in the bum when they start to argue with each other, and they are hard to keep straight when things get complex, so I just tend to have one trigger for all events, with a bit of logic in it to determine what’s going to happen when.

Here is the basic template I start with for all my triggers:

trigger XXXXTriggerAllEvents on XXXX (
    before insert,
    before update,
    before delete,
    after insert,
    after update,
    after delete,
    after undelete) {
        if(Trigger.isInsert || Trigger.isUpdate) {
            if (Trigger.isUpdate && Trigger.isAfter) {
                MyScriptLibrary.DoStuffAfterAnUpdate(Trigger.new, Trigger.oldMap);
            } else if (Trigger.isInsert) {
                //Do some stuff here for when a new document is being created, like sending emails
            }
        }
}


As you can see, you can determine which event you are dealing with by testing for “.isInsert” or “.isAfter”, and then run the right bit of code for what you want. Again, I like to keep everything in easy sight, so I use functions whenever I can, with nice easy-to-understand names.

In the above case, I want to check a field after an update to see if it has been changed from empty to containing a value. You can do this with the very, very useful ‘Trigger.new’ and ‘Trigger.oldMap’, as you can see below:

public with sharing class MyScriptLibrary {

    public static void DoStuffAfterAnUpdate(List<XXXX> newXXXX, Map<Id, XXXX> oldXXXX) {

        for (XXXX currentXXXX : newXXXX) {
            // has the field gone from empty to holding a value?
            if (!String.isBlank(currentXXXX.MyField) && String.isBlank(oldXXXX.get(currentXXXX.Id).MyField)) {
                System.debug('OMG!!! MyField changed, DO SOMETHING');
            }
        }

    }

}


So we are taking the list of objects3 that caused the trigger to run, i.e. “Trigger.new”, looping through them, and comparing them to the values in “Trigger.oldMap” (which contains the old values) to see if things have changed.


So that is the theory over. You can see existing triggers by entering Setup and searching for “apex triggers”.



BUT you can’t make them from there; you make them from the object you want them to act on.

Let’s take the Case object as an example.



In Setup you search for “case”, click on “Case Triggers”, and then on “New”.



That will give you the default trigger…. let’s swap that out for the all-events trigger I showed above.



Better. Then just click save and your trigger will be live. Simples..

Now there is an alternative way to make triggers, and you do sometimes have to use it when you want to create a trigger for an object that does not appear in Setup, such as the Attachment object.



You will first need to open the Developer Console (select your name in the top right and pick “Developer Console”), then select File —> New —> Apex Trigger.



Select “Attachment” as the sObject and give it a sensible name.



And now you can do a trigger against an object that you normally don’t see.
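
As a minimal sketch of where that leaves you (the debug statement is just a placeholder for whatever you actually want to do):

trigger AttachmentTriggerAllEvents on Attachment (after insert) {
    for (Attachment att : Trigger.new) {
        // ParentId tells you which record the attachment was added to
        System.debug('New attachment ' + att.Name + ' on parent ' + att.ParentId);
    }
}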

Final Notes:
  1. Salesforce process flows can fight with your triggers: if you get “A flow trigger failed to execute” all of a sudden, go and look to see whether your power users have been playing with the process flows.
  2. Make sure you have security set correctly, particularly with community users; both security profiles and sharing settings can screw with your triggers if you can’t write to or see fields.
  3. As always, make sure your code works if there are millions of records in Salesforce. CODE TO CATER FOR LIMITS.

  1. You know that pain-in-the-butt thing you sometimes have to do in Domino, where you have to use the NoteID rather than the Document ID before a document is saved? This gets round that issue. 

  2. Yes eagle eyes, there is no “before undelete”. 

  3. You are best to handle all code in terms of batches rather than the single document you are used to in Domino; we will cover batching in a later blog post, but just take my word for it for the moment. 

Presenting at MWLUG

Hooray!!! I have been accepted to speak at MWLUG this year.

I will be presenting 2 sessions:

1) “The SSL Problem And How To Deploy SHA2 Certificates” with Gabriella Davis

This session went down well at Connect, and we are hoping that Austin will love this changed and updated version; Gab is awesome to present with.

2) “Salesforce for Domino Dogs”

Now, if you saw this at Engage I urge you to come again, as this is an evolving presentation that changes dramatically with each iteration (depending on the presenters and the ever-changing world of Salesforce):

  • Version 1: Balanced Architect (Engage 2016)
  • Version 2: Happy Evangelist (DNUG 2016)
  • Version 3: Rabid Developer <— This is the one I will be presenting

It will be my first trip out there, and besides presenting I will be manning the stand (the rest of the team are insisting I wear a shirt and everything).

P.S.

I’m looking for someone to room-share/split costs with (I sleep on the floor, so there never seems to be a point in getting a room to myself)…. I can provide references…

SalesForce for Domino Dogs 2: Scheduled Agents

Welcome to the second part of the Salesforce for Domino Dogs series. This one is a monster, but don’t worry, we will be revisiting and clearing up some of the complex parts in other blog posts. What is a simple thing in Domino is quite complex in Salesforce, and for a variety of very good reasons. So… scheduled agents.


Scheduled Agents: These little sods are the pain of many a Domino admin’s life. Personally I blame them for the lock-down of many a Domino server from the free-for-all that was so empowering to users, but sometimes there is no other way to get round limits or deal with certain triggered activities.

In Salesforce scheduled processes are a bit more complex than you might be used to, and this is not just a Salesforce thing, but a cloud thing—no cloud provider wants their platform just churning along in the background eating up cycles and I/O time.

So let’s break it down:

  1. The code that does stuff
  2. The scheduled task that the code sits in
  3. The schedule itself

1) The Code

So this CAN just be any bit of Apex you want, but most of the time you will actually end up using Batch Apex. Batch Apex is a whole set of articles in its own right, but in this case it’s just a way of getting round the Apex limits.

… hmmm that does not help. OK let me explain:

You know how Domino scheduled agents will only run for so long before the agent manager shuts them down? This is to stop you writing rubbish code that screws up the system. Apex has a load of limits just like that, and the one that bites quite often is the limit that you can only send 10 emails using Send() in a given transaction (you can send 1,000 bulk emails per day). To get round this limit you have to “batch”, or break up your code into chunks. In Domino this would be like saying we want to process a whole view’s-worth of documents, but in chunks of, say, five documents at a time.

An empty bit of batch apex looks like this:

global class NotifyAllUsersInAView implements Database.Batchable<sObject> {

    // The start method is called at the beginning of the job and works out which objects this code is going to run against.
    // It uses a SOQL query to work this out.
    global Database.QueryLocator start(Database.BatchableContext BC){

    }

    // The execute method is called for each chunk of objects returned from the start method.
    global void execute(Database.BatchableContext BC, List<Contact> scope){

    }

    // The finish method is called once all the batches have been processed.
    global void finish(Database.BatchableContext BC){

    }

}

Let’s take it apart. First we will use the “start” function to get the list of objects we want to work through, so we take the empty function:

    // The start method is called at the beginning of the job and works out which objects this code is going to run against.
    // It uses a SOQL query to work this out.
    global Database.QueryLocator start(Database.BatchableContext BC){

    }

… and add a search to get all “contacts” in Salesforce. We only need the email address for these contacts,1 so we add that as one of the fields it gives us:

    // The start method is called at the beginning of the code and works out which objects this code is going to run against
    // It uses an SOQL query to work this out
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator([SELECT Id, Email FROM Contact]);
    }

Next we want the empty “execute” function which will do whatever we want with each chunk of objects it is sent:

    // The execute method is called for each chunk of objects returned from the start method
    global void execute(Database.BatchableContext BC, List<Contact> scope){

    }

So in this horrible bit of code, the chunk of objects is passed in a parameter called “scope”; we then just iterate over the objects and send an email to each contact (you can see the email address we asked for in “start” being used via “c.Email”):

    // The execute method is called for each chunk of objects returned from the start method
    global void execute(Database.BatchableContext BC, List<Contact> scope){
      for(Contact c : scope){
          Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
          String[] toAddresses = new String[] {c.Email};
          mail.setToAddresses(toAddresses);
          mail.setSubject('Another Annoying Email');
          mail.setPlainTextBody('Dear XXX, this is another pointless email you will hate me for');
          Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
       }
    }

Finally we need an empty “finish” function which runs when all the batches are done:

    //The finish method is called once all the batches have been processed.
    global void finish(Database.BatchableContext BC){

    }

So let’s send a final email notification to the admins:

    //The finish method is called once all the batches have been processed
    global void finish(Database.BatchableContext BC){
        // Send an email to admin to say the agent is done.
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {emailAddress};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Agent XXX is Done.');
        mail.setPlainTextBody('Agent XXX is Done.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
    }

Put it all together and you get:

global class NotifyAllUsersInAView implements Database.Batchable<sObject> {

    // String to hold email address that emails will be sent to.
    // Replace its value with a valid email address.
    static String emailAddress = 'admin@admin.com';

    // The start method is called at the beginning of the job and works out which objects this code is going to run against.
    // It uses a SOQL query to work this out.
    global Database.QueryLocator start(Database.BatchableContext BC){
        return Database.getQueryLocator([SELECT Id, Email FROM Contact]);
    }

    // The execute method is called for each chunk of objects returned from the start method.
    global void execute(Database.BatchableContext BC, List<Contact> scope){
      for(Contact c : scope){
          Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
          String[] toAddresses = new String[] {c.Email};
          mail.setToAddresses(toAddresses);
          mail.setSubject('Another Annoying Email');
          mail.setPlainTextBody('Dear XXX, this is another pointless email you will hate me for');
          Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
       }
    }

    //The finish method is called once all the batches have been processed.
    global void finish(Database.BatchableContext BC){
        // Send an email to admin to say the agent is done.
        Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage();
        String[] toAddresses = new String[] {emailAddress};
        mail.setToAddresses(toAddresses);
        mail.setSubject('Agent XXX is Done.');
        mail.setPlainTextBody('Agent XXX is Done.');
        Messaging.sendEmail(new Messaging.SingleEmailMessage[] { mail });
    }

}

So now we need to call this code.

2) The Scheduled “Agent”

The code we have just written won’t run on a schedule on its own; we need to wrap it in a bit of code that can run on a schedule and decide how big the chunks will be. In this case they can’t be more than 10, or we would hit the Apex limit for sending emails. An empty schedule wrapper looks like this (I have called mine ‘Scheduled_Agent’, but you can call it anything):

global class Scheduled_Agent implements Schedulable{
    global void execute (SchedulableContext SC){

    }
}

Now let’s create a new instance of the batchable code we created in section 1, tell it we want it to run in batches of 5 records or objects, and tell it to execute:

global class Scheduled_Agent implements Schedulable{
    global void execute (SchedulableContext SC){
      Integer batchSize = 5;

      NotifyAllUsersInAView batch = new NotifyAllUsersInAView();
      Database.executeBatch(batch, batchSize);
    }
}

Code bit all done!

3) The Schedule

Now it comes time to actually schedule the code to run at a certain time. You can set this up via the user interface by going into Setup, searching for “Apex Classes”, and selecting the result:


Select “Schedule Apex”


As you can see, the options are limited to, at most, a daily run; you can’t specify anything more frequent. However, we need to run more often than that.2

First open up your Developer Console by selecting your name in the top right and picking it from the drop-down.


Now open up the “Execute Anonymous Window” from the debug menu.


You can now run Apex code manually, and as such you can schedule jobs with a lot more precision, using a cron string. In this case we want to run the agent every 10 minutes within the hour, so we create a new instance of our “Scheduled_Agent” class and schedule it appropriately:
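
In the Execute Anonymous window you would run something like the following sketch (the job names are made up; a Salesforce cron string reads seconds, minutes, hours, day of month, month, day of week, and a single expression can only fire at one fixed minute per hour, hence six copies of the job, one per ten-minute mark):

System.schedule('Scheduled Agent 00', '0 0 * * * ?', new Scheduled_Agent());
System.schedule('Scheduled Agent 10', '0 10 * * * ?', new Scheduled_Agent());
System.schedule('Scheduled Agent 20', '0 20 * * * ?', new Scheduled_Agent());
System.schedule('Scheduled Agent 30', '0 30 * * * ?', new Scheduled_Agent());
System.schedule('Scheduled Agent 40', '0 40 * * * ?', new Scheduled_Agent());
System.schedule('Scheduled Agent 50', '0 50 * * * ?', new Scheduled_Agent());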


Click “Execute” and the jobs will be scheduled. It should be noted that you can only have 100 scheduled jobs in your org, and this approach uses up six of them, so some planning would be good.


And there you go: scheduled agents. Let the legacy of horror continue!


  1. When you get an object via SOQL, you ask for all the fields you want; this is not like getting a Notes document, where you just get access to all the document’s fields automatically. 

  2. Well, we don’t, but you just know someone will demand it be run more often. 

SalesForce for Domino Dogs 1: Profile Documents

Following on from the initial session “Salesforce for Domino Dogs” that Paul Mooney and I did at Engage, a modified version of which has just been presented at #DNUG, I figured that a series of dev articles on how you would do a job in Salesforce that you had always taken for granted in Domino might be a good idea, because:

  1. It would give an honest way of comparing features between the two systems, shorn of the hype/marketing/platform-bashing that frankly gets on my thungers from both sides.
  2. It will hopefully help people trying to integrate the 2 systems.
  3. As IBM are one of the largest Salesforce consultancies in the world, it is something a Champion should do.
  4. The Salesforce community is very short on this kind of thing given its size in comparison to traditional IBM communities, and with people like René in it I want to try and help improve it.

These articles are not in any order and are not meant to represent any form of training guide.

So let’s get started. First up: profile documents!!


In Domino you tend to store the config settings for an app in a profile document,1 one place for all your one-off settings.

To get the same features in Salesforce you use a ‘custom setting’, which does exactly the same job and has one huge advantage over a normal Salesforce custom object that could otherwise do the same job.

(It should be noted that Domino profiles are nothing like Salesforce profiles)

To create a custom setting, go into Setup and search for “Custom Settings”



Click on the “New” button and fill in some sane details. For normal configs, i.e. stuff you would use system-wide, select a setting type of “List”; if you want to use them for things like default values in fields and formulas, then select “Hierarchy”.



Click ‘Save’ and you now have a custom setting object; you can add one or more fields to it, just as you would with any other object in Salesforce.



Added all the fields you want? Let’s put in some data. If you go back to the list of custom settings you will see that you now have a “Manage” link; click this.



Then click “New”



Fill in the fields just like you would on a normal form. If this is a setting there is only going to be one of, I tend to give it the same title as the name of the object to keep things simple, in this case “General Settings”; if you are going to use it multiple times then give each record a name that will make sense in the context of your code.



All done. Now we can use the setting in our code, and see the reason why we would use one vs. a normal custom object.

As you can see from the code below, you don’t have to use a SELECT statement, which means getting the settings won’t count against your Apex limits. HOORAY!!!

You just call “getInstance” on the setting object with the name of the record you created, and you get the document back.

General_Settings__c generalSettings = General_Settings__c.getInstance('General Settings');
String thirdPartAppURL = '';
if (null != generalSettings) {
    thirdPartAppURL = generalSettings.Third_Part_App_URL__c;
}


Simples..
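
If you picked “Hierarchy” rather than “List”, access is much the same, but the value is resolved per user or profile, falling back to the org-wide default. A minimal sketch, assuming a hypothetical hierarchy setting called Defaults__c:

Defaults__c orgWide = Defaults__c.getOrgDefaults(); // the org-wide default record
Defaults__c forMe = Defaults__c.getInstance();      // resolved for the running user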


  1. Well, you are supposed to, but you never do, thanks to the evil caching and the fact that they are a sod to just copy around, so you end up with a config document and a view you store them in. 
