Happenings

Passed Field Service Lightning Certification

I passed one of Salesforce's newest certifications, Field Service Lightning. And, boy, was this one weird. Basically, if you're thinking you want to pass this, I would wait a year. Work on something else. Because, unlike the other 7 exams I've taken and passed (I recently passed the Platform Dev II exam), there is no single study guide that covers everything. Just type "Field Service Lightning" into Google. Boom, there's your study guide. You just have to read everything anywhere about it. And a lot of the Salesforce publications have misspellings, missing words, and other editing errors that kind of say "this is rushed". It is an AWESOME product, and I look forward to building with it, but the certification is freshly fallen snow. Let some other people pack the trail down before you take it on.

Read full happening...

Populate StandardSetController from Salesforce Data API

Requirements:

  1. Retrieve > 50,000 sObjects for a table
  2. Page can't be in "Read Only" mode
  3. The object has > 2 million unindexed records (so SOQL can't be used)

This solution uses chained calls to the Salesforce Data API and parses the returned JSON into sObjects:

https://gist.github.com/leehildebrand/9353241e617b3670c72e763fa8151a9a
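
The gist above has the full implementation; a condensed sketch of the chained-callout pattern looks roughly like this (the Named Credential name, API version, and queried fields are assumptions):

public with sharing class DataApiSetController {

    // Builds a StandardSetController from records fetched via the REST (Data) API,
    // following nextRecordsUrl until every batch has been retrieved.
    public static ApexPages.StandardSetController buildController() {
        List<Account> accounts = new List<Account>();
        // "Self" is an assumed Named Credential pointing back at this org
        String path = '/services/data/v39.0/query/?q=' +
            EncodingUtil.urlEncode('SELECT Id, Name FROM Account', 'UTF-8');
        Boolean done = false;
        while (!done) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:Self' + path);
            req.setMethod('GET');
            HttpResponse res = new Http().send(req);
            Map<String, Object> body = (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
            // Re-serialize just the records array, then deserialize it into sObjects
            accounts.addAll((List<Account>) JSON.deserialize(
                JSON.serialize(body.get('records')), List<Account>.class));
            done = (Boolean) body.get('done');
            if (!done) {
                path = (String) body.get('nextRecordsUrl');
            }
        }
        return new ApexPages.StandardSetController(accounts);
    }
}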

Read full happening...

Pass multiple, strongly typed variables from Process Builder & Flow to Apex (and maybe replace some triggers while you're at it)

Triggers. They're fast, they're easy, and if you follow a framework, you can have a lot of control over how they fire. But they can also be a nightmare. If something's really, really broken, you can get yourself in a situation where you can't make changes to your Triggers in production quickly. Even if you can, versioning still requires a deployment from your source control (assuming you have one). In those cases, wouldn't a deployable, point-and-click tool that natively keeps its past 50 iterations for instant roll-back be better? It would. And so, to the rescue comes Process Builder!

If you are handling "after" logic for inserts or updates, it is possible to bypass triggers entirely and set up a relationship between a Process or Flow and the Apex handler class. "But wait!", says the reader, "when I use Process Builder, all I can send is an array of wrapper class objects. How could I use that to get records to their handlers?" To that reader I say: Welcome to Invocable Variables!

Most readers will know it's a good idea not to handle logic in triggers. Instead, review the nature of the change and the Trigger Context, then hand records off to appropriate Apex methods for more complex duties. For "after" logic on inserts or updates, the exact same handoff can be duplicated from a Process (or Flow) to a class in an optimally bulkified manner because you can send whatever you want to an invocable method in an Apex class so long as you define a wrapper class that contains those variables. You may just want to send Apex a variable to be used in an API request or something. But, if you want, you can even use those variables to reconstitute an array of the sObject that kicked off the process (thereby mimicking the handoff from Trigger to handler class).

This works whether you call Apex from Process Builder, or first call a Flow from Process Builder then call Apex from the Flow.

Let me demonstrate. The example below would eliminate the need for "after" handling in a trigger for some Accounts and Opportunities by:

  1. Using a Process to call Apex to hand off newly created Accounts.
  2. Using that same Process to call a Flow.
  3. Having the Flow make Opportunities related to the Accounts that kicked off the Process.
  4. Having the Flow send those Opportunities to Apex.
  • Start with a Process that handles events on Accounts:

  • Add criteria diamonds. Since Summer '16, you've been able to execute actions for more than one criteria node. By doing so, you can call multiple invocable methods in your Apex classes depending on what's happened to your record(s) (important note: only one invocable method is allowed per class, so if you're going to send parameters to multiple methods, you'll need multiple classes). Handling these evaluations in criteria diamonds can take the place of evaluative logic that used to be coded in Apex. For our demo, we'll restrict the first diamond to execute actions only on Account inserts by setting its evaluative formula to ISNEW(). We won't specify any criteria for the second diamond (we'll be handling its handoff criteria in the Flow designer):
  • Now we need to specify some actions. Each diamond will have one action. For the first criteria diamond's action, let's call Apex directly from Process Builder. To do so, we need to have a class with an invocable method. If we've not specified any parameters for our invocable method, we won't see an option to send parameters in Process Builder:

  • So, let's create a class that looks like this.

https://gist.github.com/leehildebrand/2249733d3317e0ee9845d2d31932e631
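
A class along those lines might look something like this (a sketch; the labels and debug line are guesses at what the gist contains, but the wrapper matches the AccountParameter object mentioned below):

public class AccountHandler {

    // Each @InvocableVariable field becomes a parameter you can set in Process Builder.
    public class AccountParameter {
        @InvocableVariable(label='Account Id' required=true)
        public Id accountId;
        @InvocableVariable(label='Account Name')
        public String accountName;
    }

    // Only one @InvocableMethod is allowed per class.
    @InvocableMethod(label='Handle new Accounts')
    public static void handleAccounts(List<AccountParameter> params) {
        // Reconstitute the Accounts that kicked off the Process
        List<Account> accounts = new List<Account>();
        for (AccountParameter p : params) {
            accounts.add(new Account(Id = p.accountId, Name = p.accountName));
        }
        System.debug('****** accounts: ' + accounts);
    }
}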

  • Now, we have the option to set the variables of an AccountParameter wrapper class object for each Account being created:

  • Our next criteria diamond sends Accounts to this simple flow:

  • The flow starts by receiving data from Process Builder to set its variables:

  • Use the AccountId and AccountName variables sent to the flow to create an Opportunity (where {!OppName} is the simple formula {!AccountName} + ' Opp'):

  • The flow then inserts the Opps and sends them to this invocable method:

https://gist.github.com/leehildebrand/b082ce47c574d43c6592f1079841b0b6
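
One plausible shape for that handler, assuming the Flow passes the new Opportunity Ids (the gist may pass the records themselves):

public class OpportunityHandler {

    // A Flow can hand a collection of Ids straight to an invocable method.
    @InvocableMethod(label='Handle new Opportunities')
    public static void handleOpportunities(List<Id> opportunityIds) {
        List<Opportunity> opps = [SELECT Name, Amount, CloseDate
                                  FROM Opportunity
                                  WHERE Id IN :opportunityIds];
        System.debug('****** opps: ' + opps);
    }
}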

  • So, running the following anonymous Apex to insert two Accounts:

insert new Account[]{new Account(Name='Test Account 1'),new Account(Name='Test Account 2')};

...brings back these debug statements:

****** accounts: (Account:{Id=001i0000027njZKAAY, Name=Test Account 1}, Account:{Id=001i0000027njZLAAY, Name=Test Account 2})
****** opps: (Opportunity:{Id=006i000000iDWy9AAG, Name=Test Account 1 Opp, Amount=1.0, CloseDate=2017-02-24 00:00:00}, Opportunity:{Id=006i000000iDWyAAAW, Name=Test Account 2 Opp, Amount=1.0, CloseDate=2017-02-24 00:00:00})

And there we go! Without using a trigger, we were able to:

  1. Insert Accounts
  2. Send them to an Apex handler method
  3. Launch a flow and create Opportunities for those Accounts
  4. Send the Opportunities to their own Apex handler method

Obviously, there are plenty of situations where triggers are required, but hopefully this illuminates at least a couple instances where clicks can replace code.

Read full happening...

Return Records From Salesforce Analytics API (no SOQL)

SOQL is the best. I mean, Dynamic SOQL? C'mon, that's just cool. But SOQL has its problems. One big problem is big objects. I mean, more than 200k records big. You start querying that object, and you'll probably get a message that looks like this:
System.QueryException: Non-selective query against large object type

Bummer. Maybe you can just add selective fields to your 'where' clause. Maybe you can turn the field you are filtering by into an external ID (so that it is indexed). Maybe you can contact Salesforce and see if they feel like making the field you are filtering by a custom index. And maybe you can't, or maybe they won't. Then what?

There are a couple of good options left, but the one I want to talk about is leveraging the Analytics API to return your results. For this you are going to need:

  1. At least one report that uses the fields you were trying to put in your 'where' clause as its filters
  2. A Custom Setting (list type) to keep track of your report's name & Ids (which can change on deployment)
  3. Apex that sets a report filter and processes the result

Before we begin, an important disclaimer: You cannot call the Analytics API from a trigger, but that's OK. You can replicate all the same functionality with other tools. If you are running your Apex class based on changes to your data that would typically cause a trigger to fire, you need to call that Apex class from Process Builder instead (I will be making a blog post soon about how to pass multiple, strongly typed parameters from Process Builder to Apex in bulk).

Here's an example. Let's say I have the following report:

A few results show up where the Account Number is blank. But where's the Account named "Demo 1"? I want Account Demo 1! I know its Account Number is "12345". So I query for it: [SELECT Name FROM Account WHERE AccountNumber = '12345']. But, uh-oh! There are 500,000 Account records, and Account Number is not indexed. So I get an error. And, let's just say there's no good way to make the query selective. No worries, because what I'm going to do next is add the name and Id of the Report we just made to a Custom Setting, like so...

Now we're getting somewhere. At this point, I can run Apex that will set the report filter, then process the results and end up with a list of Accounts, just as if I had run a query:

https://gist.github.com/leehildebrand/85c76a1e4b40adc6578ac6048c7faad5

Notice a few things:

  1. I had to put the name of the Custom Setting record that holds our Report information into lines 2 & 10.
  2. Line 5 is hardcoded for demo purposes. In an actual class, I would follow my "10 Rules of Apex" rule #2 (NO hardcoding values) and use a variable to provide the Reports.ReportFilter.setValue() parameter.
  3. When we set the filter this way, it does not change the saved report definition. If we went to the point-and-click UI and had a look, the report would still have the null value for the "Account Number" filter, as it does in the picture above.

Assuming there is one Account that has an Account Number of '12345', the debug statement on line 17 will return:
(Account:{Id=001i000001d7uKzAAI, Name=Demo 1})

There's the "Demo 1" Account Name we were looking for! In a neat little array, just a like a SOQL query returns.

After this, we can keep on writing the class to use this array, safe in the knowledge that we can keep adding records to our Account object and still use the Analytics API to access them in our Apex code!
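
For reference, the heart of that class boils down to something like this (the Custom Setting name and fields, the report column position, and the filter index are assumptions; the gist is the real thing):

public with sharing class AnalyticsApiQuery {

    public static List<Account> getAccountsByNumber(String accountNumber) {
        // Look up the report Id stored in the list Custom Setting
        Id reportId = Report_Settings__c.getValues('Account by Account Number').Report_Id__c;

        // Pull the report's metadata so the filter can be changed at run time
        Reports.ReportDescribeResult describe = Reports.ReportManager.describeReport(reportId);
        Reports.ReportMetadata rm = describe.getReportMetadata();

        // Set the first filter's value; this does NOT touch the saved report definition
        rm.getReportFilters()[0].setValue(accountNumber);

        // Run the report synchronously with details and walk the fact map
        Reports.ReportResults results = Reports.ReportManager.runReport(reportId, rm, true);
        Reports.ReportFactWithDetails fact =
            (Reports.ReportFactWithDetails) results.getFactMap().get('T!T');

        List<Account> accounts = new List<Account>();
        for (Reports.ReportDetailRow row : fact.getRows()) {
            // For an Account Name column, getValue() holds the record Id and getLabel() the Name
            Reports.ReportDataCell nameCell = row.getDataCells()[0];
            Id accountId = (String) nameCell.getValue();
            accounts.add(new Account(Id = accountId, Name = nameCell.getLabel()));
        }
        return accounts;
    }
}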

Read full happening...

Use Schema Global Variables for Field Labels

VisualForce is on its way out. Lightning is just about mandatory, and JavaScript all the things! Except, there's still about 9,000 trillion VisualForce pages out there to maintain. To help reduce the amount of time you spend going back and updating field labels, use this trick to dynamically get the correct label for the field in question (so if an admin changes the label, it automatically reflects the change on the VisualForce page). Hot dog! Now, you can spend more time writing those Lightning Components.

Here's an example of a page where two developers talk about the same field: Account.Demo_Field__c, but one uses static text and one uses $ObjectType to create a schema reference to the label...

https://gist.github.com/leehildebrand/9e9e1a57331adddf5d996fad8ef69cea
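
The contrast in the gist amounts to something like this (the outputLabel tags here are illustrative; the gist may use a different component):

<!-- Developer 1: hardcodes the label text -->
<apex:outputLabel value="Demo Field"/>

<!-- Developer 2: pulls the label from the schema at run time -->
<apex:outputLabel value="{!$ObjectType.Account.fields.Demo_Field__c.Label}"/>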

When rendered, this code looks like...

So far so good. But what if some ambitious admin changes the label for Account.Demo_Field__c so that it no longer matches the API name (kind of a no-no, but it happens)? If they change it to "The New Name for Demo Field", the page will now display:

Developer 1 has some work to do. They need to go back and update their page to refer to the new Account.Demo_Field__c label correctly. What about Developer 2? They're done! The schema variable they used automatically reflects the label change, so they can go on their merry way and work on something else! So be like Developer 2 by learning more about using $ObjectType here.

Read full happening...

Lee's 10 Rules of Apex

I've seen a couple versions of rules developers feel are core to good Apex code. I'd like to throw these into the ring as concepts that have been helpful to me, and that I deviate from at my own peril:

  1. Write your tests FIRST
  2. NO hard coded values
  3. Use a test data factory
  4. Test for predictability, not coverage
  5. If it doesn't assert, it's not a test
  6. Use a custom error class
  7. Don't repeat yourself
  8. JS > VF
  9. Don't repeat yourself
  10. Standard is better than custom (use everything you pay for)

For a different perspective, here's a DreamForce session that I absolutely agree with.

Read full happening...

DEV501 Study Guide

I just passed the DEV 501 multiple-choice exam for the Advanced Developer certification. Man, was that a weird test. Rather than all the cool things Salesforce can do with Heroku Connect or the Tooling API, it really wanted to drill down into 2009-era functionality like VF template syntax. It doesn't really matter why; anyone reading this just wants to know, "So what did you have to learn that surprised you?" Well, here you go: I'm publishing the notes I took from countless study guides, training videos, and documentation research, all compressed into 8 easy pages. Good luck to anyone trying to become more proficient on the platform!

DEV 501 Study Guide

Read full happening...

Create a configurable InboundEmail Service

The Force.com Messaging.InboundEmailHandler interface is cool. Read more about it here. It allows you to send email to a particular address, then take that email apart and use its pieces to interact with any of your objects in any way you can envision using Apex code. The InboundEmail service can create new records, which can fire triggers or start trigger-ready flows. The InboundEmail service can also send API messages, making for quick points of integration. So why isn't it more popular? Well, because the content of email messages is notoriously hard to predict.

Think of it this way: the InboundEmail service will always give you a Messaging.InboundEmail object (we'll call it "email") from which you can always pull the subject and body using email.subject and email.plainTextBody. But what if you had a template-based email, wanted a field called "Applications", and planned to get the names of the applications out of the string email.plainTextBody returned? What if you didn't know what apps might be listed (and there was no predefined set of possibilities)? All you could do is look at what came right before the list of apps started in the email template, what came right after, and then use .substringBetween() to get the application names (you might then use .split() to return a list if the app names are separated by commas or spaces).

But what if the string that came before the app names changed based on the day of the week? You'd have to write logic to look for several possible beginnings of the app names substring. And what if nothing came after the app names because they were at the end of the email? You'd have to use .substringAfter(). And what if you didn't just want app names? What if you wanted the day an event would happen, the contact information of a group listed in the email body, and whether or not the email subject indicated "high priority"? You'd have to write logic into your Apex code for each of those substrings. That's a lot of custom logic, which you can do. But even after you're done, what would it take to break it?

One character.

If one unexpected character shows up, your Apex code will not know where to begin or end gathering a substring, and will either return something ridiculous or break with a null reference error.

You will have to figure out what changed, rewrite the logic for that substring in your class (probably with cascading effects on other substrings), and redeploy.

That's why people don't like Inbound Email Services. And that's what I'm here to change.

The main problem with the model above is that it relies on hardcoding values into code, which is a big no-no in any language's best practices. What needs to be introduced are two Custom Settings: one to remove strings that are superfluous, and one to define the beginning and end of each substring that will be assigned to an sObject field.

First, assign your email body (or subject, if you want) to a string we are going to work with for the rest of the class (for example: String message = email.plainTextBody.normalizeSpace();). Then run a regex on the "message" string to remove non-alphanumeric characters.

For the first Custom Setting, let's call it "Substrings_To_Remove__c". Simply call Substrings_To_Remove__c.getAll().values() to loop through a list of substrings where, if found, each substring will be removed from the subject or email body string. If the email's sender adds some characters that you don't need to capture, or characters that are breaking your code, just add a record to the Substrings_To_Remove__c custom setting. Then you'll be working with a clean string to begin extracting values.

After that, we need a Custom Setting to populate the sObject fields. We'll call it Fields_to_fill__c, and we'll name a Fields_to_fill__c record for each of the sObject fields we want to populate. Give it text fields Beginning__c and End__c, plus checkboxes to indicate whether the substring we are looking for sits at the start or end of "message". Place comma-separated lists in the Beginning__c and End__c fields if the value may change (based on something like day or sender). Then, in the handler:

  1. Run a Schema describe on the objects we are interested in to get a list of field names.
  2. Iterate through the list of fields to see whether a record of that name exists in Fields_to_fill__c.
  3. If so, check whether the Fields_to_fill__c record's checkboxes indicate use of .substringBetween(), .substringBefore(), or .substringAfter().
  4. Use Fields_to_fill__c.getValues(fieldResult.getName()).Beginning__c.split(',') (or End__c) to iterate through the comma-separated values and see which ones are true for message.contains().
  5. You now have the necessary data to extract a substring from "message" and assign that value to the sObject field. You can even use the substring value in Dynamic SOQL to populate a lookup field.
  6. Once the correct value is identified, assign it to the sObject and upsert (remember to bulkify). For example, for Account a with field Apps__c, where fieldResult.getName() = 'Apps__c': a.put(fieldResult.getName(), message.substringBetween(start, the_end).trim()); (.trim() removes leading or trailing whitespace).
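
Put together, a skeleton of the handler might look like this (the Value__c field on Substrings_To_Remove__c and the choice of Account as the target object are assumptions; the substringBefore/After branches are omitted for brevity):

global class ConfigurableEmailHandler implements Messaging.InboundEmailHandler {

    global Messaging.InboundEmailResult handleInboundEmail(
            Messaging.InboundEmail email, Messaging.InboundEnvelope envelope) {

        Messaging.InboundEmailResult result = new Messaging.InboundEmailResult();

        // 1. Normalize the body and strip the configured superfluous substrings
        String message = email.plainTextBody.normalizeSpace();
        for (Substrings_To_Remove__c toRemove : Substrings_To_Remove__c.getAll().values()) {
            message = message.remove(toRemove.Value__c);
        }

        // 2. Walk the target object's fields and fill any that have a matching
        //    Fields_to_fill__c record describing where its value begins and ends
        Account a = new Account();
        for (Schema.SObjectField f : Schema.SObjectType.Account.fields.getMap().values()) {
            Schema.DescribeFieldResult fieldResult = f.getDescribe();
            Fields_to_fill__c cfg = Fields_to_fill__c.getValues(fieldResult.getName());
            if (cfg == null) {
                continue;
            }
            // Try each possible beginning/end until a pair matches the message
            for (String start : cfg.Beginning__c.split(',')) {
                for (String theEnd : cfg.End__c.split(',')) {
                    if (message.contains(start) && message.contains(theEnd)) {
                        a.put(fieldResult.getName(),
                              message.substringBetween(start, theEnd).trim());
                    }
                }
            }
        }
        upsert a;   // remember to bulkify if one email can produce many records

        result.success = true;
        return result;
    }
}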

And there you go! If your email sender changes the template, you can either pluck unwanted characters out of the "message", or change what you are looking for in order to identify your substring values WITHOUT a redeployment!

Read full happening...

Set the Content Type in Header for Apex REST POST Request

Jeff Douglas (as usual) has an excellent Apex POST request example at his blog. It demonstrates how to perform callouts during trigger execution. The example is missing one thing, however, that can be critical to the ability of the service being called to interpret your request body: the Content Type.

Depending on whether you are using the JSON or XML classes to encode your data, you will need to set the Content-Type header to indicate how the service being called should read the request body. Here is an example for passing a JSON object:
req.setHeader('Content-Type','application/json; charset=utf-8');
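
For context, a complete request with that header might look like this (the endpoint and payload are placeholders):

HttpRequest req = new HttpRequest();
req.setEndpoint('https://example.com/api/records');   // placeholder endpoint
req.setMethod('POST');
// Tell the receiving service how to parse the body
req.setHeader('Content-Type', 'application/json; charset=utf-8');
req.setBody(JSON.serialize(new Map<String, Object>{ 'name' => 'Test' }));
HttpResponse res = new Http().send(req);
System.debug(res.getStatusCode() + ' ' + res.getBody());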


Read full happening...