Course details

Apache Avro is a popular data serialization format in the Hadoop technology stack.

It is used widely across the Hadoop stack, e.g. in Hive, Pig, and MapReduce components.

It stores metadata along with the actual data.

It is a row-based data storage format.

It provides schema evolution and block compression.

The metadata, including the schema, is represented as JSON in the file.
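
As a rough sketch of what this looks like in Java with the org.apache.avro library (the "User" schema, field names, and users.avro file name below are made up for illustration), a record schema is declared as JSON, records are appended to a container file, and the writer stores the schema in the file header and compresses the rows in blocks:

import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.CodecFactory;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class WriteUsers {
    // Hypothetical schema for illustration: a "User" record with two fields.
    private static final String SCHEMA_JSON =
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"},"
      + "{\"name\":\"age\",\"type\":\"int\"}]}";

    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);

        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "alice");
        user.put("age", 30);

        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            // Rows are grouped into blocks and compressed with deflate.
            writer.setCodec(CodecFactory.deflateCodec(6));
            // The schema (as JSON) is written into the file header as metadata.
            writer.create(schema, new File("users.avro"));
            writer.append(user);
        }
    }
}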

Avro depends heavily on its schema. Because the schema is always available when data is written and read, no per-field type information has to be stored with each value, so serialization is fast and the resulting serialized data is compact. The schema is stored along with the Avro data in a file so that the data can be processed later by any program.
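
For instance, the file written in the sketch above could be read back without passing any schema to the reader; the writer's schema is recovered from the file header (again only a sketch, reusing the hypothetical users.avro file):

import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

public class ReadUsers {
    public static void main(String[] args) throws Exception {
        // No schema is passed in: the writer's schema travels inside the file.
        GenericDatumReader<GenericRecord> datumReader = new GenericDatumReader<>();
        try (DataFileReader<GenericRecord> fileReader =
                 new DataFileReader<>(new File("users.avro"), datumReader)) {
            Schema embedded = fileReader.getSchema();   // schema read back from the header
            System.out.println("Embedded schema: " + embedded.toString(true));
            for (GenericRecord record : fileReader) {
                System.out.println(record);             // e.g. {"name": "alice", "age": 30}
            }
        }
    }
}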

In Avro RPC, the client and the server exchange schemas during the connection handshake. Because both sides then know each other's schema, differences such as same-named fields, missing fields, and extra fields can be resolved.
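
The matching that this handshake relies on is Avro's schema resolution between a writer's schema and a reader's schema. The sketch below uses plain binary encoding rather than actual RPC, with made-up schemas: a same-named field is carried over, a new reader field is filled from its default, and a writer-only field is skipped. This is also the mechanism behind the schema evolution mentioned above.

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class SchemaResolution {
    // Writer's view of the record: "name" plus a "nickname" the reader no longer has.
    private static final String WRITER_SCHEMA =
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"},"
      + "{\"name\":\"nickname\",\"type\":\"string\"}]}";

    // Reader's view: "name" matches by name; "age" is new, so it carries a default.
    private static final String READER_SCHEMA =
        "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
      + "{\"name\":\"name\",\"type\":\"string\"},"
      + "{\"name\":\"age\",\"type\":\"int\",\"default\":0}]}";

    public static void main(String[] args) throws Exception {
        Schema writerSchema = new Schema.Parser().parse(WRITER_SCHEMA);
        Schema readerSchema = new Schema.Parser().parse(READER_SCHEMA);

        // Serialize one record with the writer's schema.
        GenericRecord written = new GenericData.Record(writerSchema);
        written.put("name", "alice");
        written.put("nickname", "al");
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(writerSchema).write(written, encoder);
        encoder.flush();

        // Deserialize with both schemas: "name" is copied, "nickname" is dropped,
        // and "age" is filled in from the reader schema's default value.
        Decoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord read =
            new GenericDatumReader<GenericRecord>(writerSchema, readerSchema).read(null, decoder);
        System.out.println(read); // e.g. {"name": "alice", "age": 0}
    }
}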

Avro schemas are defined in JSON, which simplifies implementation in languages that already have JSON libraries.

Besides Avro, Hadoop supports other serialization mechanisms such as Sequence Files, Protocol Buffers, and Thrift.


Updated on 22 March, 2018