
SQL to read from BLOB

In some production scenarios you might need to make decisions based on property values that are not exposed as database columns.

Pega provides database functions (backed by Java classes installed in Oracle) for this. Below is the syntax of the SQL to read a property from the BLOB:

SELECT pr_read_from_stream('PropName', pzInsKey, pzPVStream) AS "PropName" FROM pc_work


Make sure the supporting Java classes are installed and valid before you run pr_read_from_stream.
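
For example, to read a hypothetical unexposed property .CustomerType from the work table BLOB (the property name and the key filter here are assumptions for illustration):

```sql
SELECT pzInsKey,
       pr_read_from_stream('CustomerType', pzInsKey, pzPVStream) AS "CustomerType"
FROM   pc_work
WHERE  pzInsKey LIKE 'MYORG-APP-WORK%';
```

The first argument is the property reference to extract, and pzPVStream is the BLOB column that holds the clipboard page.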

Process Mining - BPM

  • Process mining is the discovery of an organization's operational business process from event (audit) logs using software tools
  • The advantage of process mining is that it identifies deviations from the standard business process so those outliers can be fixed
  • Determining the operational process from event (audit) logs can be challenging when the data is missing or of poor quality
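
The directly-follows counting at the heart of process mining can be sketched in a few lines of Python (the event log and activity names below are made up for illustration; real tools read timestamped audit logs):

```python
from collections import defaultdict

def directly_follows(event_log):
    """Count directly-follows relations from an event (audit) log.

    event_log: list of (case_id, activity) tuples, ordered by time
    within each case. Returns {(a, b): count} where activity b
    directly followed activity a in some case.
    """
    # group events into one trace per case
    traces = defaultdict(list)
    for case_id, activity in event_log:
        traces[case_id].append(activity)

    # count each adjacent pair of activities within a trace
    counts = defaultdict(int)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

log = [
    ("c1", "Create"), ("c1", "Review"), ("c1", "Approve"),
    ("c2", "Create"), ("c2", "Review"), ("c2", "Reject"),
    ("c3", "Create"), ("c3", "Approve"),  # deviation: Review skipped
]
print(directly_follows(log))
```

Edges that never appear in the standard model, such as ("Create", "Approve") above, are the outliers worth investigating.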


Heap Dump Analysis - Out Of Memory errors -PEGA

OOM (out of memory) is one of the common errors that teams hit while working out optimized JVM and cache settings.

Here are some tools that help analyze heap dumps:

IBM HeapAnalyzer
Eclipse MAT (Memory Analyzer Tool)
IBM® Support Assistant
(to download IBM Support Assistant, you must be a registered user on the IBM Software Support website)


Heap dump formats:

PHD (portable heap dumps)
HPROF binary dumps
System dumps (core dumps)
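
How a dump gets produced depends on the JVM. As a sketch (the paths, heap size, and app jar name are assumptions):

```shell
# HotSpot: write an HPROF dump automatically when an OOM is thrown
java -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar app.jar

# IBM/OpenJ9: request a PHD heap dump on OutOfMemoryError
java -Xdump:heap:events=systhrow,filter=java/lang/OutOfMemoryError -jar app.jar

# Or take an HPROF dump of a running HotSpot process on demand
jmap -dump:live,format=b,file=/tmp/dumps/heap.hprof <pid>
```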



Eclipse MAT (Memory Analyzer Tool)

You can download it from https://www.eclipse.org/mat/downloads.php


You need the DTFJ plugin to support PHD heap dumps:

http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/runtimes/tools/dtfj/







The Leak Suspects report shows where the problem is. In the sample below, when you drill down you can see which class objects are causing the OOM issue.








How to externalize file attachments in PEGA -S3



What is externalization?

By default, Pega stores file attachments in its database tables. Externalization moves that content to an external storage system instead.

How do you store file attachments in an external storage system such as S3?


Pega provides configuration forms for Amazon S3 connectivity, so attachments can be externalized in three simple steps.

STEP 1: Create an authentication profile

Create -> Security -> Authentication Profile














STEP 2: Create a repository and test the connectivity
Create -> SysAdmin -> Repository


STEP 3: Configure the storage location to Amazon S3
Open the application instance -> Integration & Security tab -> Content Storage section -> select the S3 repository


























This step completes the configuration.

Validate/test the file location:
Pick a case -> Recent content (attachment) -> upload a file























Validate the file in Amazon S3



Validate the file in the database (internal process)
Data-Admin-WorkAttach-File
coming soon...

Potential issues in the configuration

"You are not authorized to create, modify or lock DATA-REPOSITORY"











Please follow the steps below to overcome this issue:





Additional points to consider:

  • Make sure the connections are secure and follow your organization's security policies
  • Handle connection exception scenarios
  • What happens if S3 goes down? (It can, even in an internal S3 scenario)

Limitations/More to explore

  • Currently it does not support configuring on-premises S3 storage (you need to customize)
  • How to extend this to other attachment types
  • How to support multiple buckets in a single application

Do not tamper with user input elements

If you are not able to track what values users entered, application users might claim that they selected the correct value and the application behaved wrongly.

The application can use the values of user inputs in business logic but should not tamper with them.

The application can allow users to edit their inputs at a later point, but make sure you capture those changes for audit purposes.









ARCHIVE

Archiving is the process of moving inactive data to a cheaper storage system, where it is retained for a longer period for audit purposes.

CICD AND DEVOPS


PEGA CUSTOMER SERVICE


PEGA SMART INVESTIGATE


PEGA PLATFORM


Reclaim unused Space




  • As part of the archive process, rows are deleted from the primary storage tables once the data is moved to archive storage

  • The delete process leaves fragmentation behind as the tables grow

  • Oracle provides several options to reclaim the space / defragment:
    • Redefine
    • Shrink
     However, both of these features require ample database green zones (maintenance windows), which is not viable for high-availability platforms.
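
As a sketch, the Shrink option looks like this in Oracle (the table name is an assumption; shrink still takes brief locks, so a quiet window is needed):

```sql
ALTER TABLE pc_work ENABLE ROW MOVEMENT;
ALTER TABLE pc_work SHRINK SPACE CASCADE;
ALTER TABLE pc_work DISABLE ROW MOVEMENT;
```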

What are the alternatives?

How to optimize or improve your customer support response time


Response time is one of the key client experience attributes.

How do you track your customer tickets?

Do you use automated customer service software tools, or
a manual process (paper-based, Outlook, Gmail, etc.)?

How do your customers interact with support teams?

Phone, chat, IVR, or email?

Hopefully 90-99% of phone/chat/IVR inquiries are resolved during the first contact.

Email inquiries are more challenging.

Can you produce a report of current response times, and what is the quality of that data?

If you are using automated tools and you have current response times:
  • Document the end-to-end flow
    • Can you document the end-to-end process flow?
    • Are you unable to document the E2E flow because there are multiple paths?
    • Can you trace it?
    • Standardize your process and document it
  • Identify whether there is a requirement to change the business process
  • Identify any steps that can be eliminated or automated with technology
  • Is the client support team using the technology effectively?





Security - Audit operator ID changes with pyTrackSecurityChanges

Operator audit is critical to preventing fraud in the organization.

  • Operator ID instances are data instances in Pega.
  • Pega provides a field-level tracking feature to support such an audit.
  • Save the pyTrackSecurityChanges data transform (DT) into the class whose fields you want to track, and configure the required fields.
  • Save-as the OOTB TrackSecurityChanges trigger into the same class.




How to debug AWS S3 connection handshake

S3 connection sample Java app




log4j  settings
Debug log
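
The log4j:ERROR lines at the top of this output come from a missing appender definition. A minimal log4j.properties that routes AWS SDK and HTTP wire debug output to the console might look like this (the appender name and levels are assumptions):

```properties
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
# request/response and signing details from the AWS SDK
log4j.logger.com.amazonaws=DEBUG
# header and wire-level logging from Apache HttpClient
log4j.logger.org.apache.http=DEBUG
```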


log4j:ERROR Could not find value for key log4j.appender.file
log4j:ERROR Could not instantiate appender named "file".
2020-04-15 22:55:11,158 [main] DEBUG com.amazonaws.AmazonWebServiceClient -  Internal logging successfully configured to commons logger: true
2020-04-15 22:55:11,263 [main] DEBUG com.amazonaws.metrics.AwsSdkMetrics -  Admin mbean registered under com.amazonaws.management:type=AwsSdkMetrics
* srinivas Gowni
2020-04-15 22:55:12,053 [main] DEBUG com.amazonaws.request -  Sending Request: GET https://s3.us-west-2.amazonaws.com / Headers: (User-Agent: aws-sdk-java/1.11.267 Windows_10/10.0 Java_HotSpot(TM)_64-Bit_Server_VM/13.0.2+8 java/13.0.2, amz-sdk-invocation-id: b0ab5b4e-d3c5-0886-5761-85646d8b8433, Content-Type: application/octet-stream, ) 
2020-04-15 22:55:12,112 [main] DEBUG com.amazonaws.auth.AWS4Signer -  AWS4 Canonical Request: '"GET
/

amz-sdk-invocation-id:b0ab5b4e-d3c5-0886-5761-85646d8b8433
amz-sdk-retry:0/0/500
content-type:application/octet-stream
host:s3.us-west-2.amazonaws.com
user-agent:aws-sdk-java/1.11.267 Windows_10/10.0 Java_HotSpot(TM)_64-Bit_Server_VM/13.0.2+8 java/13.0.2
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20200416T025512Z

amz-sdk-invocation-id;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD"
2020-04-15 22:55:12,112 [main] DEBUG com.amazonaws.auth.AWS4Signer -  AWS4 String to Sign: '"AWS4-HMAC-SHA256
20200416T025512Z
20200416/us-west-2/s3/aws4_request
83befc2f1a4732c9f1f61afc66a9c0b5bd2c077d34a196a4e10a1c35b6fad589"
2020-04-15 22:55:12,112 [main] DEBUG com.amazonaws.auth.AWS4Signer -  Generating a new signing key as the signing key not available in the cache for the date 1586995200000
2020-04-15 22:55:12,139 [main] DEBUG org.apache.http.client.protocol.RequestAddCookies -  CookieSpec selected: default
2020-04-15 22:55:12,161 [main] DEBUG org.apache.http.client.protocol.RequestAuthCache -  Auth cache not set in the context
2020-04-15 22:55:12,162 [main] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager -  Connection request: [route: {s}->https://s3.us-west-2.amazonaws.com:443][total kept alive: 0; route allocated: 0 of 50; total allocated: 0 of 50]
2020-04-15 22:55:12,202 [main] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager -  Connection leased: [id: 0][route: {s}->https://s3.us-west-2.amazonaws.com:443][total kept alive: 0; route allocated: 1 of 50; total allocated: 1 of 50]
2020-04-15 22:55:12,205 [main] DEBUG org.apache.http.impl.execchain.MainClientExec -  Opening connection {s}->https://s3.us-west-2.amazonaws.com:443
2020-04-15 22:55:12,372 [main] DEBUG org.apache.http.impl.conn.DefaultHttpClientConnectionOperator -  Connecting to s3.us-west-2.amazonaws.com/52.218.246.72:443
2020-04-15 22:55:12,372 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -  connecting to s3.us-west-2.amazonaws.com/52.218.246.72:443
2020-04-15 22:55:12,372 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -  Connecting socket to s3.us-west-2.amazonaws.com/52.218.246.72:443 with timeout 10000
2020-04-15 22:55:12,534 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -  Enabled protocols: [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1]
2020-04-15 22:55:12,535 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -  Enabled cipher suites:[TLS_AES_256_GCM_SHA384, TLS_AES_128_GCM_SHA256, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_RSA_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_DHE_DSS_WITH_AES_256_GCM_SHA384, TLS_DHE_RSA_WITH_AES_128_GCM_SHA256, TLS_DHE_DSS_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
2020-04-15 22:55:12,535 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -  socket.getSupportedProtocols(): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1, SSLv3, SSLv2Hello], socket.getEnabledProtocols(): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1]
2020-04-15 22:55:12,536 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -  TLS protocol enabled for SSL handshake: [TLSv1.2, TLSv1.1, TLSv1, TLSv1.3]
2020-04-15 22:55:12,536 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -  Starting handshake
2020-04-15 22:55:12,887 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -  Secure session established
2020-04-15 22:55:12,887 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -   negotiated protocol: TLSv1.2
2020-04-15 22:55:12,887 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -   negotiated cipher suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
2020-04-15 22:55:12,887 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -   peer principal: CN=*.s3-us-west-2.amazonaws.com, O="Amazon.com, Inc.", L=Seattle, ST=Washington, C=US
2020-04-15 22:55:12,887 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -   peer alternative names: [s3-us-west-2.amazonaws.com, *.s3-us-west-2.amazonaws.com, s3.us-west-2.amazonaws.com, *.s3.us-west-2.amazonaws.com, s3.dualstack.us-west-2.amazonaws.com, *.s3.dualstack.us-west-2.amazonaws.com, *.s3.amazonaws.com, *.s3-control.us-west-2.amazonaws.com, s3-control.us-west-2.amazonaws.com, *.s3-control.dualstack.us-west-2.amazonaws.com, s3-control.dualstack.us-west-2.amazonaws.com, *.s3-accesspoint.us-west-2.amazonaws.com, *.s3-accesspoint.dualstack.us-west-2.amazonaws.com]
2020-04-15 22:55:12,887 [main] DEBUG com.amazonaws.http.conn.ssl.SdkTLSSocketFactory -   issuer principal: CN=DigiCert Baltimore CA-2 G2, OU=www.digicert.com, O=DigiCert Inc, C=US
2020-04-15 22:55:12,893 [main] DEBUG com.amazonaws.internal.SdkSSLSocket -  created: s3.us-west-2.amazonaws.com/52.218.246.72:443
2020-04-15 22:55:12,894 [main] DEBUG org.apache.http.impl.conn.DefaultHttpClientConnectionOperator -  Connection established 10.0.0.15:52944<->52.218.246.72:443
2020-04-15 22:55:12,894 [main] DEBUG org.apache.http.impl.conn.DefaultManagedHttpClientConnection -  http-outgoing-0: set socket timeout to 50000
2020-04-15 22:55:12,894 [main] DEBUG org.apache.http.impl.execchain.MainClientExec -  Executing request GET / HTTP/1.1
2020-04-15 22:55:12,894 [main] DEBUG org.apache.http.impl.execchain.MainClientExec -  Proxy auth state: UNCHALLENGED
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> GET / HTTP/1.1
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> Host: s3.us-west-2.amazonaws.com
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> x-amz-content-sha256: UNSIGNED-PAYLOAD
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> Authorization: AWS4-HMAC-SHA256 Credential=AKIA3W6II5PIX4A3QVKE/20200416/us-west-2/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=c0aacfb323499fbc957d0ab30b123ee054d4fdbdfd3e59e1848f31b41fdd9b59
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> X-Amz-Date: 20200416T025512Z
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> User-Agent: aws-sdk-java/1.11.267 Windows_10/10.0 Java_HotSpot(TM)_64-Bit_Server_VM/13.0.2+8 java/13.0.2
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> amz-sdk-invocation-id: b0ab5b4e-d3c5-0886-5761-85646d8b8433
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> amz-sdk-retry: 0/0/500
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> Content-Type: application/octet-stream
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> Content-Length: 0
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.headers -  http-outgoing-0 >> Connection: Keep-Alive
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "GET / HTTP/1.1[\r][\n]"
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "Host: s3.us-west-2.amazonaws.com[\r][\n]"
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "x-amz-content-sha256: UNSIGNED-PAYLOAD[\r][\n]"
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "Authorization: AWS4-HMAC-SHA256 Credential=AKIA3W6II5PIX4A3QVKE/20200416/us-west-2/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=c0aacfb323499fbc957d0ab30b123ee054d4fdbdfd3e59e1848f31b41fdd9b59[\r][\n]"
2020-04-15 22:55:12,897 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "X-Amz-Date: 20200416T025512Z[\r][\n]"
2020-04-15 22:55:12,898 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "User-Agent: aws-sdk-java/1.11.267 Windows_10/10.0 Java_HotSpot(TM)_64-Bit_Server_VM/13.0.2+8 java/13.0.2[\r][\n]"
2020-04-15 22:55:12,898 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "amz-sdk-invocation-id: b0ab5b4e-d3c5-0886-5761-85646d8b8433[\r][\n]"
2020-04-15 22:55:12,898 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "amz-sdk-retry: 0/0/500[\r][\n]"
2020-04-15 22:55:12,898 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "Content-Type: application/octet-stream[\r][\n]"
2020-04-15 22:55:12,898 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "Content-Length: 0[\r][\n]"
2020-04-15 22:55:12,898 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
2020-04-15 22:55:12,898 [main] DEBUG org.apache.http.wire -  http-outgoing-0 >> "[\r][\n]"
2020-04-15 22:55:13,112 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
2020-04-15 22:55:13,112 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "x-amz-id-2: iEOvH312HGna74kM+w1k4fy/zhemEphyPhzIwE0EDreNyxVXLyvR9yimqjhsxGm/9P/N1f7badc=[\r][\n]"
2020-04-15 22:55:13,113 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "x-amz-request-id: C9BDF89507649C5C[\r][\n]"
2020-04-15 22:55:13,113 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "Date: Thu, 16 Apr 2020 02:55:14 GMT[\r][\n]"
2020-04-15 22:55:13,113 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "Content-Type: application/xml[\r][\n]"
2020-04-15 22:55:13,113 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "Transfer-Encoding: chunked[\r][\n]"
2020-04-15 22:55:13,113 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "Server: AmazonS3[\r][\n]"
2020-04-15 22:55:13,113 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "[\r][\n]"
2020-04-15 22:55:13,117 [main] DEBUG org.apache.http.headers -  http-outgoing-0 << HTTP/1.1 200 OK
2020-04-15 22:55:13,117 [main] DEBUG org.apache.http.headers -  http-outgoing-0 << x-amz-id-2: iEOvH312HGna74kM+w1k4fy/zhemEphyPhzIwE0EDreNyxVXLyvR9yimqjhsxGm/9P/N1f7badc=
2020-04-15 22:55:13,117 [main] DEBUG org.apache.http.headers -  http-outgoing-0 << x-amz-request-id: C9BDF89507649C5C
2020-04-15 22:55:13,117 [main] DEBUG org.apache.http.headers -  http-outgoing-0 << Date: Thu, 16 Apr 2020 02:55:14 GMT
2020-04-15 22:55:13,117 [main] DEBUG org.apache.http.headers -  http-outgoing-0 << Content-Type: application/xml
2020-04-15 22:55:13,117 [main] DEBUG org.apache.http.headers -  http-outgoing-0 << Transfer-Encoding: chunked
2020-04-15 22:55:13,117 [main] DEBUG org.apache.http.headers -  http-outgoing-0 << Server: AmazonS3
2020-04-15 22:55:13,125 [main] DEBUG org.apache.http.impl.execchain.MainClientExec -  Connection can be kept alive for 60000 MILLISECONDS
2020-04-15 22:55:13,188 [main] DEBUG com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser -  Sanitizing XML document destined for handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListAllMyBucketsHandler
2020-04-15 22:55:13,189 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "17d[\r][\n]"
2020-04-15 22:55:13,189 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "<?xml version="1.0" encoding="UTF-8"?>[\n]"
2020-04-15 22:55:13,190 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>5e4e0e69f6bd199d4ba035a34bdb7d7a5a4f62092083e55d5f31878a7ef57b1e</ID><DisplayName>srinug13</DisplayName></Owner><Buckets><Bucket><Name>srinipegaattachments</Name><CreationDate>2020-03-04T22:11:56.000Z</CreationDate></Bucket></Buckets></ListAllMyBucketsResult>[\r][\n]"
2020-04-15 22:55:13,190 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "0[\r][\n]"
2020-04-15 22:55:13,190 [main] DEBUG org.apache.http.wire -  http-outgoing-0 << "[\r][\n]"
2020-04-15 22:55:13,191 [main] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager -  Connection [id: 0][route: {s}->https://s3.us-west-2.amazonaws.com:443] can be kept alive for 60.0 seconds
2020-04-15 22:55:13,192 [main] DEBUG org.apache.http.impl.conn.PoolingHttpClientConnectionManager -  Connection released: [id: 0][route: {s}->https://s3.us-west-2.amazonaws.com:443][total kept alive: 1; route allocated: 1 of 50; total allocated: 1 of 50]
2020-04-15 22:55:13,192 [main] DEBUG com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser -  Parsing XML response document with handler: class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListAllMyBucketsHandler
2020-04-15 22:55:13,203 [main] DEBUG com.amazonaws.request -  Received successful response: 200, AWS Request ID: C9BDF89507649C5C
2020-04-15 22:55:13,203 [main] DEBUG com.amazonaws.requestId -  x-amzn-RequestId: not available
2020-04-15 22:55:13,203 [main] DEBUG com.amazonaws.requestId -  AWS Request ID: C9BDF89507649C5C
Your Amazon S3 buckets are:
* srinipegaattachments


Pega and Unstructured Data

Characteristics of Unstructured Data:

  • Data can't be stored in the form of rows and columns
  • No fixed format or sequence
  • Variable sizes; no fixed size or limit

Examples of unstructured data:

  • Email data
  • File attachments
  • Notes and memos

Advantages:

  • Flexible and convenient for users
  • Scalable

Disadvantages and challenges:

  • Difficult to store and manage
  • Difficult to search
  • Hard to analyze

Pega and Unstructured Data:

Pega stores unstructured data in application properties, but due to Oracle column limitations it cannot store the data as column values; instead it stores the information in BLOBs.

Because of the nature of unstructured data, the database tables grow drastically, so the recommendation is to externalize file attachments to a scalable storage system such as S3.

GCPolicy and High CPU usage and PEGA0028

GC Policy

We changed the GC policy from gencon to balanced and experienced high CPU usage.

From the analysis we figured out that a low JIT cache caused the issue.
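
On IBM/OpenJ9 JVMs the relevant knobs can be sketched as follows (the sizes and paths are illustrative assumptions, not tuned values):

```shell
# pick the GC policy explicitly
java -Xgcpolicy:gencon ...       # generational/concurrent (the previous setting)
java -Xgcpolicy:balanced ...     # region-based policy for large heaps

# raise the total JIT code cache if a small cache forces repeated recompilation
java -Xcodecachetotal256m ...

# log JIT activity to confirm the diagnosis
java -Xjit:verbose,vlog=/tmp/jit.log ...
```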