
Check out Salesforce Spring '17 Sandbox Preview Instructions

The Salesforce Spring '17 release is almost here, and you will soon be able to take advantage of its new functionality.
Sandbox customers have an opportunity to access and test these changes ahead of the release.

The Spring '17 Sandbox Preview window is scheduled to begin in January 2017. If you would like your Sandbox organization to take part in the Spring '17 Preview, your Sandbox must be active on a preview instance by 6 January 2017 to be included in the overall instance upgrade.

Important points:

  • The Sandbox Preview window for Spring '17 is scheduled to begin on 6 January 2017.
  • If you decide to stay on the current Winter '17 release, your Sandbox will not be upgraded to Spring '17 until 11 February 2017.
Check out the Salesforce Spring '17 Sandbox Preview Instructions for more information.

Known issues reported by Salesforce users in Spring '17:

Recently I got a chance to work on a batch job that had to perform a mass update on Opportunity records based on filter conditions. When I ran the batch job from the Developer Console with a batch size of 1, I found an issue, which I reported to Salesforce.
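For context, the batch I was working on looked roughly like the sketch below. The class name, the Description = null filter, and the field being updated are placeholders I am using for illustration, not the actual business logic:

global class OpportunityMassUpdateBatch implements Database.Batchable<sObject> {

    global Database.QueryLocator start(Database.BatchableContext BC) {
        // Placeholder filter: select only the records that still need the update
        return Database.getQueryLocator(
            'SELECT Id, Description FROM Opportunity WHERE Description = null'
        );
    }

    global void execute(Database.BatchableContext BC, List<Opportunity> scope) {
        for (Opportunity objOpp : scope) {
            objOpp.Description = 'Updated by batch';
        }
        update scope;
    }

    global void finish(Database.BatchableContext BC) {
        System.debug('Opportunity mass update finished');
    }
}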

The issue: a Batch Apex job finishes unexpectedly in Spring '17 without processing all of the specified records.

In Spring '17, an Apex batch class (Database.Batchable) does not process all of its batches if the execute method takes a long time to run, even while staying within governor limits. After some number of processed records, it stops processing batches and finishes unexpectedly, showing the Apex job status as Completed. There is no error in the debug log and no exception is reported.

The Salesforce team has resolved this issue on many instances. If you want to reproduce the issue in your org, use the code below:

1. Create a sample Apex batch class that processes 1000 records:

global class OpportunityBatch implements Database.Batchable<sObject>, Database.Stateful {

    // Database.Stateful keeps callCount across execute() invocations
    private Integer callCount = 0;

    global Database.QueryLocator start(Database.BatchableContext BC) {
        String query = 'SELECT Id, Name FROM Opportunity LIMIT 1000';
        return Database.getQueryLocator(query);
    }

    global void execute(Database.BatchableContext BC, List<Opportunity> scope) {
        for (Opportunity objOpp : scope) {
            // This loop simply loads the CPU so that each batch takes longer to run
            for (Integer i = 0; i < 2500; i++) {
                // The initialization vector for AES128 must be exactly 16 bytes
                Blob exampleIv = Blob.valueOf('Example of an IV');
                Blob key = Crypto.generateAesKey(128);
                Blob data = Blob.valueOf('Data to be encrypted' + String.valueOf(i));
                Blob encrypted = Crypto.encrypt('AES128', key, exampleIv, data);

                Blob decrypted = Crypto.decrypt('AES128', key, exampleIv, encrypted);
                String decryptedString = decrypted.toString();
            }
        }
        callCount++;
    }

    global void finish(Database.BatchableContext BC) {
        System.debug('Batch finished, call count: ' + callCount);
    }
}


2. Run the batch job from the Developer Console with a batch size of 1:
Id batchJobId = Database.executeBatch(new OpportunityBatch(), 1);

3. After the batch job finishes, check the latest debug log. You will see that not all batches were processed: the internal callCount variable in the example class above ends up with a value less than 1000:


Batch finished, call count: 657
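
As an additional check (this query is my own addition, not part of the original repro steps), you can also look at the job record itself. AsyncApexJob shows how many batches were processed versus the total; run this in the same anonymous Apex block as step 2, or substitute the job Id:

AsyncApexJob job = [
    SELECT Id, Status, JobItemsProcessed, TotalJobItems, NumberOfErrors
    FROM AsyncApexJob
    WHERE Id = :batchJobId
];
System.debug(job.Status + ': processed ' + job.JobItemsProcessed +
    ' of ' + job.TotalJobItems + ' batches, errors: ' + job.NumberOfErrors);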

Workaround: There is a workaround, although it adds a manual step; it is what I did. After the batch job completes, run the same query that you use in the batch class and check how many records are still pending an update, as shown in the sketch below. If the count is 0, the batch job processed everything. If not, you have to run the batch job again.
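
A rough sketch of that manual check, run as anonymous Apex. The Description = null filter and the OpportunityMassUpdateBatch class are the placeholders from the earlier sketch in this post; use your own batch class and filter conditions:

// Same placeholder filter as the batch class's start() query
Integer pending = [SELECT COUNT() FROM Opportunity WHERE Description = null];
System.debug('Records still pending update: ' + pending);

if (pending > 0) {
    // Not everything was processed, so run the batch job again
    Database.executeBatch(new OpportunityMassUpdateBatch(), 200);
}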

If you are facing this issue in your instance, log a case with Salesforce.


