3/14/2017 - 9:05 PM

What are all the things we have to follow when we create a new project?

Difference between traits and abstract classes:
A trait can take type parameters but cannot have constructor parameters, whereas an abstract class can have constructors. If a type consists only of implicit instances and a single method, it is good to define it as a trait,
since we are not going to receive any constructor parameters from the client.
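A minimal illustration of the constructor difference (the names here are made up for the example):

```scala
// A trait can declare type parameters and abstract members,
// but (in Scala 2) it cannot take constructor parameters.
trait Show[A] {
  def show(a: A): String
}

// An abstract class can take constructor parameters.
abstract class Animal(val name: String) {
  def sound: String
}

class Dog(name: String) extends Animal(name) {
  def sound: String = "woof"
}

val intShow: Show[Int] = new Show[Int] { def show(a: Int) = a.toString }
```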

Ex: Frida Library.

Trait Example:

package frida
package querybuilder

/**
  * A type class responsible for serializing values into a string format that can
  * be used in a Solr query.
  * The companion object for this trait defines instances for common types such as
  * `Long`, `String`, `Int`, `Boolean`, etc.
  * @tparam A the type that can be serialized.
  */
trait SolrValueSerializer[A] {

  /**
    * Serialize a value into a string format that can be used in a Solr query.
    * NOTE: This method is responsible for escaping any special characters.
    * @param a the value to serialize.
    * @return the serialized value.
    */
  def serialize(a: A): String
}


object SolrValueSerializer {

  def escape(char: Char): String =
    if (Solr.IllegalCharacters.contains(char)) "\\" + char.toString else char.toString

  def sanitized(v: String): String = {
    val escaped = v.flatMap(escape)
    // Quote values containing whitespace so Solr treats them as a single term.
    if (escaped.contains(' ')) "\"" + escaped + "\""
    else escaped
  }

  implicit val stringIsSerializable: SolrValueSerializer[String] = new SolrValueSerializer[String] {
    def serialize(s: String) = sanitized(s)
  }

  implicit val boolIsSerializable: SolrValueSerializer[Boolean] = new SolrValueSerializer[Boolean] {
    def serialize(b: Boolean) = b.toString
  }

  implicit val longIsSerializable: SolrValueSerializer[Long] = new SolrValueSerializer[Long] {
    def serialize(l: Long) = l.toString
  }

  implicit val intIsSerializable: SolrValueSerializer[Int] = new SolrValueSerializer[Int] {
    def serialize(i: Int) = i.toString
  }
}

Abstract Example:
sealed abstract class Bound[A] extends Product with Serializable {
  def value: A
}

object Bound {
  final case class Inclusive[A](override val value: A) extends Bound[A]
  final case class Exclusive[A](override val value: A) extends Bound[A]
}

1. Create a context for your application.
   Basically it should have a config file; if you are using Spark, then a SparkContext,
   and if you are using operations like Kafka, then KafkaOps.
   Have a trait and a case class:

trait Ctx[T <: Config] {
  def sc: SparkContext
  def cfg: T
  def kafkaOps: KafkaOps
}

case class Context[T <: Config](sc: SparkContext,
                                cfg: T,
                                kafkaOps: KafkaOps
                               ) extends Ctx[T]

From now on we can use the context to access all configuration parameters, the SparkContext, and KafkaOps as well.

P.S: You can refer either DENA or Gazelle to use ctx in our code.

2. Have package-level type aliases like this. It is very helpful to give meaningful type aliases.

package object core {
  type AcctId = String
  type ProfileId = String
  type PWishId = String
  type CWishId = String
  type RecordingId = String
  type TitleId = String
  type SeriesId = String
}
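For example, an alias-rich signature documents itself (the stub below is illustrative, not from the project):

```scala
type AcctId      = String
type RecordingId = String

// Both parameter and result are still Strings at runtime,
// but the signature now says what each String means.
def recordingsFor(acct: AcctId): List[RecordingId] =
  List(s"$acct-rec-1", s"$acct-rec-2") // illustrative stub

// versus the alias-free equivalent, where both Strings are opaque:
// def recordingsFor(acct: String): List[String]
```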

Have your Main class extend App so that it can be run from the command line.
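A minimal sketch:

```scala
object Main extends App {
  // `args` is provided by App; the object's body runs at startup.
  println(s"started with ${args.length} args")
}
```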

To convert a request into a domain object, use a custom JsonParser class like this (using Argonaut):

def transactionIdDecoder(req: Request): ApplicationK[TransactionId] =
  jsonInput[TransactionId](req)

def jsonInput[A: DecodeJson](req: Request): ApplicationK[A] =
  for {
    start <- Util.nanoTime.liftKleisli
    res   <- req.as[A](org.http4s.argonaut.jsonOf[A]).liftKleisli
    _     <- Util.nanoTime.map(end => ()).liftKleisli // record (end - start) as the decode latency here
  } yield res
  In a project, use a context to hold all project dependencies.
  If we want to convert requests (JSON to case classes and back), we can use lift-json.
  Sextant is used to record JVM metrics as well.
  Hi Cody, just a quick Scala question. What's the advantage I get using this
```final case class DvrKeyId(dvrKeyId: String) extends AnyVal
final case class Dvr(dvrKeyId: DvrKeyId)```
instead of this
```final case class Dvr(dvrKeyId: String)```

There are a few potential benefits. One is that if you have something like `class Dvr(dvrKeyId: String, accountId: String)` then it can be easy to accidentally pass the arguments in the wrong order.
Having different types will give you a compile error if you accidentally switch them around.

Another benefit is that if you are using scalacheck to generate test data, 
the `Arbitrary[String]` instance might produce random strings that aren’t valid dvr key IDs, while if you have a separate `DvrKeyId` type, you can create an `Arbitrary[DvrKeyId]` instance that generates only valid dvr key IDs (for example maybe they are only ascii characters or something).

I kind of go back and forth on whether I think the benefit is worth it in Scala. Haskell has better built-in support for this sort of thing.

With a sealed trait, we don't need a catch-all when pattern matching: the compiler checks exhaustiveness, so `case _` is unnecessary.
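For example, with a sealed hierarchy the compiler can verify the match covers every subtype (the types here are illustrative):

```scala
sealed trait Status
case object Active   extends Status
case object Inactive extends Status

// Covers every subtype; no `case _` needed. If a new Status is
// added later, the compiler warns here instead of the match
// failing silently at runtime.
def label(s: Status): String = s match {
  case Active   => "active"
  case Inactive => "inactive"
}
```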

The reason why we have implicit defs inside a trait is scope: the implicits are only visible where the trait is mixed in, so they don't collide with others.
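A sketch of that scoping (illustrative names):

```scala
trait Encoder[A] { def encode(a: A): String }

// Instances live in a trait, so they are only in implicit scope
// where the trait is mixed in; they cannot collide with instances
// defined in some other module.
trait EncoderInstances {
  implicit val intEncoder: Encoder[Int] =
    new Encoder[Int] { def encode(a: Int) = a.toString }
}

object Renderer extends EncoderInstances {
  def render[A](a: A)(implicit e: Encoder[A]): String = e.encode(a)
}
```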

val isSolrEnabled = false

if (isSolrEnabled) println("Iniyan") else println("Hello")
// Instead of if/else, use fold (available via scalaz's BooleanOps syntax).
isSolrEnabled.fold(t = println("Iniyan"), f = println("Hello"))

// What is the best way to handle two different implementations of the same functionality based on a config value?

// 1. No need to rename the old methods; the old methods are never touched.
// 2. No need to repeat the branching logic in each and every method.

//split APIs that now have an alternate solr implementation into their own partial function
val nonSolr: PartialFunction[DvrServiceRequest, DvrServiceResponse] = {
   case x: GetRecordingsRequest => processGetRecordings(x)
}

//define another PF defined over the same domain for the solr implementations
val solr: PartialFunction[DvrServiceRequest, DvrServiceResponse] = {
   case x: GetRecordingsRequest => GetRecordingsSolr.processGetRecordings(x)
}

//the PF that is actually used from above will be determined by the configuration
val configured: PartialFunction[DvrServiceRequest, DvrServiceResponse] = isSolrEnabled.fold(t = solr, f = nonSolr)

//chain the configured PF above with the other APIs that don't have an alternate solr implementation
override val handleRequest: PartialFunction[DvrServiceRequest, DvrServiceResponse] = otherApis orElse configured

If a function takes two arguments, take only one argument and return a function which takes the other argument
and returns the result. That way it is more modular and reusable.
val clientProvider = new ClientFacade {
  override def sendTask(ae: AuditEvent): Task[Unit] = {
    val p = SimpleAudit.sendT(producer) // p: AuditEvent => Task[Unit]
    p(ae)                               // returns Task[Unit]
  }
}
// SimpleAudit.sendT(p: KafkaProducer[Array[Byte], Array[Byte]])(ae: AuditEvent): Task[Unit]
// If we need to mock things, always have a trait, put the method in it, and implement
// that trait instead of implementing directly.

trait ClientFacade {
  def sendTask(ae: AuditEvent): Task[Unit]
}

val clientProvider = new ClientFacade {
  override def sendTask(ae: AuditEvent): Task[Unit] = {
    val p = SimpleAudit.sendT(producer)
    p(ae)
  }
}
// This is how you need to model moving forward:
1. You want to insert a string into the DB. It's a side effect, so let's wrap it in Task:
   val func: String => Task[Unit] = ???
2. Lift the function to a Sink.
3. Use `to` to redirect the stream of events to the sink.


val func:String=>Task[Unit] = s => Task.delay(println(s))
//Lift the function to sink
val mySink:Sink[Task, String] = sink.lift(func)
val myProcess:Process[Task, String] = Process.emitAll(Seq("1","2","3")).toSource
val p = myProcess to mySink

val p1 = myProcess.flatMap(x => Process.eval(Task.delay(println(x)))) // equivalent without an explicit sink
If you have a Java API and you want to delay its execution
(meaning at the end of the world you can decide whether to execute it or not),
you can use Task.delay(<Java API call>)
to catch the exceptions and print them:

// Replace the Java API call in place of "Iniyan" and change the type.
val p2: Task[String] = Task.delay("Iniyan")
val p3 = p2.attempt.flatMap {
  case -\/(ex)    => Task.delay(println(s"Exception is ${ex}"))
  case \/-(value) => Task.delay(println(s"Value is ${value}"))
}