Thursday, January 12, 2017

LED bails and stumps in T20 cricket



Fig 1: Glowing bails and stumps (Mahendra Singh Dhoni)

This idea came from Bronte Eckermann, a mechanical industrial designer from Australia. It was turned into a product by Zing International, which is why it is known as the Zing Wicket System. The wicket system was first used in a club game in Adelaide, Australia. Since it is very difficult for an umpire to judge exactly when the bails have left the stumps during a run-out or a stumping, these wickets proved to be a great, and rather fancy, means of getting the decision right.

After only three years of research, the system was introduced into international cricket at the semi-final and final of the Under-19 World Cup in the UAE. By that time, these stumps were already being used in 20-over formats in many countries, including in one of the world's biggest cricket leagues, the IPL.

Fig. 2: Zing bails, made of plastic and equipped with LEDs

The stumps and bails are equipped with low-voltage batteries that light the LEDs shown in the figure above and power the other components inside, such as the sensors and microprocessors. When a bail loses contact with the stumps, the sensor detects it and the microprocessor makes the LEDs in both the bails and the stumps glow. All of this processing is completed within 1/10000th of a second, far faster than human perception. The bails are made of plastic and are lightweight, just like wooden bails.
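To make the idea concrete, here is a minimal sketch, in Python, of the kind of polling loop such a bail might run. The sensor model, polling interval and function names are assumptions for illustration only; the actual Zing firmware is proprietary.

```python
# Illustrative sketch only: the real Zing firmware is proprietary, so the
# sensor model, polling interval and function names below are assumptions.
import time
from itertools import chain, repeat

# Simulated contact sensor: reports contact for a few polls, then dislodgement.
sensor_readings = chain(repeat(True, 5), repeat(False))

def bail_in_contact() -> bool:
    """Stand-in for reading the contact sensor inside the bail."""
    return next(sensor_readings)

def flash_leds() -> None:
    """Stand-in for driving the LEDs in the bail and the stumps."""
    print("LEDs on bail and stumps switched on")

def monitor_bail(poll_interval_s: float = 0.0001) -> None:
    # Poll the contact sensor; the moment the bail loses contact with the
    # stump groove, light the LEDs and note when it happened.
    while bail_in_contact():
        time.sleep(poll_interval_s)
    flash_leds()
    print(f"Dislodgement detected at t={time.monotonic():.6f} s")

monitor_bail()
```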

We all know the ways a batsman can get out. These stumps make it easy for the umpire, and for us, to see exactly when the bails left the stumps. Now, about the price: for the cost of one set of the Zing Wicket System you could buy quite a few iPhones, as a set costs around 40,000 USD.



Wednesday, January 11, 2017

Hawk-Eye Technology in cricket

Cricket

The technology was first used by Channel 4 during a Test match between England and Pakistan at Lord's Cricket Ground on 21 May 2001. It is used primarily by the majority of television networks to track the trajectory of balls in flight. In the winter season of 2008/2009, the ICC trialled a referral system in which Hawk-Eye was used to refer decisions to the third umpire if a team disagreed with an LBW decision. The third umpire could look at what the ball actually did up to the point when it hit the batsman, but not at the predicted flight of the ball after impact.
Its major use in cricket broadcasting is in analysing leg-before-wicket decisions, where the likely path of the ball can be projected forward, through the batsman's legs, to see whether it would have hit the stumps. Consulting the third umpire, for conventional slow motion or Hawk-Eye, on leg-before-wicket decisions is currently sanctioned in international cricket, even though doubts remain about Hawk-Eye's accuracy in cricket.
The Hawk-Eye referral for LBW decisions is based on three criteria:
  • Where the ball pitched
  • The location of impact with the leg of the batsman
  • The projected path of the ball past the batsman
In all three cases, marginal calls result in the on-field call being maintained.
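As a rough illustration of how these three criteria combine with the umpire's-call rule, here is a small Python sketch. The data structure and field names are illustrative assumptions, not the actual Hawk-Eye implementation.

```python
# A minimal sketch of the three referral criteria plus the umpire's-call rule.
# The data structure and field names are illustrative assumptions, not the
# actual Hawk-Eye implementation.
from dataclasses import dataclass

@dataclass
class LBWReferral:
    pitched_in_line_or_outside_off: bool    # where the ball pitched
    impact_in_line: bool                    # location of impact with the pad
    projected_to_hit_stumps: bool           # projected path past the batsman
    marginal: bool                          # within the system's margin of error

def review(data: LBWReferral, on_field_decision: str) -> str:
    """Return the decision after the review ('out' or 'not out')."""
    if data.marginal:
        # Marginal calls leave the on-field decision unchanged (umpire's call).
        return on_field_decision
    if (data.pitched_in_line_or_outside_off
            and data.impact_in_line
            and data.projected_to_hit_stumps):
        return "out"
    return "not out"

print(review(LBWReferral(True, True, True, marginal=False), "not out"))  # -> out
print(review(LBWReferral(True, True, True, marginal=True), "not out"))   # -> not out
```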
Because the system covers bowling speed in real time, it is also used to show a bowler's delivery patterns, such as line and length, or swing and turn. At the end of an over, all six deliveries are often shown simultaneously to show the bowler's variations, such as slower deliveries, bouncers and leg-cutters. A complete record of a bowler can also be shown over the course of a match.
Batsmen also benefit from Hawk-Eye analysis, as a record can be brought up of the deliveries they scored from. These are often shown as a 2-D silhouetted figure of the batsman with colour-coded dots for the balls faced. Information such as the exact spot where each ball pitched, or its speed out of the bowler's hand (to gauge the batsman's reaction time), can also help in post-match analysis.
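The end-of-over summary is essentially a grouping of the six deliveries by attributes such as length and speed. The short sketch below shows that grouping with made-up delivery data.

```python
# The over summary is essentially a grouping of the six deliveries by
# attributes such as length and speed; the delivery data below are made up.
from collections import defaultdict

deliveries = [
    {"speed_kph": 138, "length": "good"},
    {"speed_kph": 142, "length": "short"},
    {"speed_kph": 118, "length": "full"},    # slower ball
    {"speed_kph": 139, "length": "good"},
    {"speed_kph": 143, "length": "short"},   # bouncer
    {"speed_kph": 136, "length": "yorker"},
]

by_length = defaultdict(list)
for ball in deliveries:
    by_length[ball["length"]].append(ball["speed_kph"])

for length, speeds in by_length.items():
    print(f"{length:>7}: {len(speeds)} ball(s), avg {sum(speeds) / len(speeds):.1f} km/h")
```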

Tuesday, January 10, 2017

Study of Wireless network based on cloud

Wireless network based on cloud 

Introduction:
Cloud-managed wireless networking is one of the latest approaches to wireless networks. It allows organizations to easily configure, deploy and manage networking devices across distributed networks, maximizing network functionality while lowering maintenance and IT costs. The network is cost effective and easy to use: it is managed centrally and controlled over the internet. Because the network is based on the cloud, it offers wide visibility and control with automatic reporting, along with features such as central management, application control, guest Wi-Fi, enterprise security, teleworker VPN and device management. Like every system it has its benefits, and cloud-based networking offers rapid deployment together with self-optimizing, self-provisioning and automatic monitoring. The network is centrally managed from the internet and has a cloud-based controller that provides centralized management and control applications; the access points are managed by the cloud through the internet. The control application lets administrators see the devices and applications being used on the network, and allows them to create access-control and application-usage rules, boosting security and improving the end-user experience. The architecture of this networking technology is thus fully cloud-based, providing access to and control over the networks.
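In practice, central management of this kind is usually exposed through a REST API on the cloud controller. The sketch below shows what querying such an API for a network's access points might look like from Python; the controller URL, endpoint path and API key are hypothetical placeholders, not any specific vendor's API.

```python
# Hypothetical cloud-controller REST API; the host, endpoint path and API key
# are placeholders, not any specific vendor's interface.
import json
import urllib.request

CONTROLLER = "https://cloud-controller.example.com"
API_KEY = "replace-with-your-api-key"

def list_access_points(network_id: str) -> list:
    """Fetch the access points of one managed network from the cloud controller."""
    req = urllib.request.Request(
        f"{CONTROLLER}/api/v1/networks/{network_id}/devices",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: print the name and status of every AP in a branch-office network.
for ap in list_access_points("branch-office-01"):
    print(ap.get("name"), ap.get("status"))
```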

Architecture:
This network makes it easy to spread an Internet connection throughout any number of places.

Fig 1: Architecture diagram of Cloud Managed Network.
The wireless network can be monitored from anywhere in the world over the internet. The architecture diagram gives the full detail of a cloud-based network using browser-based management. It consists of a wireless LAN built for management and a cloud-managed networking controller, which together provide a centrally managed tool. Setting up this cloud-based management system is straightforward. The cloud-managed wireless network has a web UI that fronts the cloud controller, as shown in the figure above. The web-based management UI gets its data from the cloud and monitors the activity and devices on the networks, which may be a campus, a branch office, retail stores, teleworkers and so on. All of these networks are controlled and managed by the cloud-based network controller. This architecture has the advantage that speed and bandwidth can be increased from the controller if needed. The cloud-based architecture also provides automatic monitoring with advance alerts that can be acted on by the controller or the administrator.

Advantages:

Lower cost:
This networking approach is cost efficient and provides faster networks. Expenses are very low because organizations do not have to purchase their own equipment and software.

Fast Deployment:
One advantage is fast deployment compared with installing a system using one's own network equipment. New applications can be rolled out quickly using cloud networking.

Productivity:
All staff have access to the organization's data and applications from home, and the network is easier to configure from anywhere. Because users can access data and applications anywhere, they can work more conveniently and accurately.

Scalability:
Capacity can be added almost instantly using the cloud network, limited mainly by the speed of the internet connection. All of this can be done with a click, from anywhere, through the cloud.

Mobility:
With internet access, the data and applications on the network can be reached from anywhere, by any user, on any device. There is no need to stay at a desk: the job can be done from anywhere, which increases the organization's productivity.

Good Security:
One of the greatest fears of any organization is security, and cloud-based networks handle it well. Most cloud service providers have stringent security policies for cloud networking and numerous data-protection measures. Data-loss prevention, an encryption and decryption strategy, physical security of data centers, and firewalls with up-to-date malware protection are the major security features applied to cloud data. Network troubleshooting is also done through the cloud, which saves a great deal of time and work.

Troubleshooting:
Many kinds of problems occur in a network. Testing and troubleshooting for such problems can be done through the cloud. The cloud-based UI provides features for solving these problems and for monitoring data and devices; the dashboard contains the troubleshooting tools.

Disadvantages:

Security:
In any IT organization, security plays an important role in keeping the organization less vulnerable and its data protected. Sensitive data become more exposed when adopting this kind of networking, because everything is based in the cloud.

Privacy:
Data privacy is a major concern for any organization, since data could be monitored by the cloud providers or by attackers. Because users and clients can access their applications and data from any location, the information could be compromised in a number of ways.

Cloud Services:
With cloud-managed wireless solutions, configuration and management duties are offloaded to the cloud. If the cloud service is down, unavailable or expired, the system becomes vulnerable and the administrator cannot make any changes until the service is restored.

Web based UI:
The browser-based management UI is not hosted on the local APs, which makes the system dependent on the cloud service. All reports and logs live on the provider's servers and are accessed over the internet in this cloud-based approach. If the cloud service is lost, the current statistics and reports cannot be viewed.

Limitation of troubleshooting:
Because troubleshooting is done from the cloud, problems arise when the cloud controller is unreachable. Since all access to the web UI and the CLI (command line interface) goes through the cloud, troubleshooting is limited.

Key service provider:
The key service providers for cloud-based wireless networks are given below:

Open-Mesh:
Open-Mesh began in 2005 with the mission of making Wi-Fi smarter and simpler. Open-Mesh provides scalable, modular hardware and powerful cloud management (a cloud controller) at ultra-low controller cost. The provider has 83,000 cloud-managed networks worldwide.

Cisco Meraki:
One of the major cloud-based network providers is Cisco Meraki, which offers 100% cloud management for faster deployment, simplified administration and richer visibility. Its main focus is on a dedicated security radio, performance and high capacity.

Aerohive:
Aerohive has over 22,000 end customers around the world. The company provides a distributed, controller-less architecture to deliver unified, intelligent, simplified networks cost-effectively to its users.

HP:
A well-known organization and one of the key service providers for cloud-based wireless networks. It provides enterprise-class performance, simplified management, lower costs with free firmware updates, and 100% uptime for survivability in case of WAN connectivity failure.[9]
                       
Other relevant information
The cloud-managed WLAN is not believed to be for everyone. Cloud-based wireless networking has evolved rapidly over the past few years. Providers such as Meraki and Aerohive offer an end-to-end cloud-managed environment covering all WLAN components, while other vendors offer only simple cloud-managed wireless router access with no security applications. Cloud-based wireless systems offer high-performance access points with all the modern security measures required. Some providers do not ship controllers at all, since their architectures do not need them, and there is no hybrid architecture model that would let an organization mix and match.

Conclusion:
In conclusion, a cloud-based wireless network uses cloud and web-based management functionality to move files among users while reducing cost. Implementation and equipment costs are reduced significantly, providing a more reliable and secure network that can be accessed from many places. Accessibility of data and applications from anywhere increases significantly with this approach, so an organization with low capital investment in its network can grow quickly and efficiently. The cloud-based wireless network has many advantages over traditional networking: low cost, greater efficiency, fast network access and advanced security measures delivered through the cloud. Organizations with cloud-based networks grow well with applications that can be accessed from anywhere, and cloud-based troubleshooting makes solving problems easier. With these features, such networks are becoming secure globally, enabling users to access files and applications safely. The cloud-based network offers significant advantages, with easier monitoring and troubleshooting that is safer, faster, smarter and more reliable. Its service providers are growing day by day, with many organizations shifting towards this advanced networking technique. It is believed that the future of Wi-Fi lies in the cloud.

Challenges on Main Memory and its system

1. Introduction/background:
Main memory is present in computers and many other devices, such as servers and mobile phones, and is one of the most important components of a computing system. The challenges that come with memory are its energy consumption, cost, storage capacity, management and performance. Scaling of the memory system is needed to sustain the growth of new applications, and the demand for memory capacity keeps increasing as new technologies are developed.

2. Literature review:
The memory system has been one of the fundamental parts of computing systems. It stores the data that the computer and other devices process. The architecture of dynamic random access memory (DRAM) consumes energy and power, and when memory performance is to be increased, challenges associated with energy and power follow. The processing cores and other clients or agents share the memory system, which increases the demand for memory capacity and bandwidth, and in turn the demand for memory performance and quality of service. Applications today are increasingly memory-intensive, and this trend grows day by day; such memory must serve applications working on both real-time and offline data. Research is ongoing into memory systems that support efficient data analysis, and memory sharing is one technique that is increasingly used by on-chip applications to improve efficiency.




Flash memory and dynamic random access memory are two technologies whose scaling affects memory systems; this scaling improves efficiency and capacity, which in turn improves cost efficiency. Phase-change memory and STT-MRAM are found to be more scalable, with bandwidth close to that of DRAM and power consumption closer to that of a hard disk. These new technologies make it possible to unify storage and memory systems.

Requirement of memory system:
Users are demanding ever more memory, and ever more from the memory system. Along with capacity, performance and bandwidth must be considered, and cost efficiency also plays an important role. The requirements can be grouped into traditional requirements and new requirements, described below:
Traditional requirements focus on capacity and performance, including cost efficiency. Because so many applications now run on shared computing systems, memory sharing has become an important technology, together with DRAM scaling for density. There is demand for high bandwidth along with low latency, and applications that share memory need to draw on a number of technologies.


The new requirements for memory are threefold: scalability, predictability and bandwidth efficiency. Scalability is harder to obtain in DRAM than in some newer technologies: DRAM scaling has progressed from the 100 nm node down to 30 nm, and device and circuit techniques are now pushing DRAM below 30 nm.

Predictability is the other key requirement for memory. In the past, shared resources offered less bandwidth and also less capacity, so design effort has focused on performance and on mitigating interference at the memory interface.

Methods:
Research needs to revisit a number of fundamental design choices in memory and computing systems: overcoming the problems of DRAM scaling, using new memory technologies, and producing new designs that deliver good performance and QoS to users. This part of the report deals with methods for overcoming these problems in memory systems. The memory components bring scaling challenges, and cooperation between the different levels and layers of the computing stack, from software and algorithms down to device architectures, memory chips and processors, is what will resolve them. One method is to develop a new dynamic random access memory architecture, which is discussed in this report. Main memory has long been implemented with DRAM because of its low latency and low cost, achieved by shrinking the DRAM cell. Further reductions in cell size, however, become more expensive because of manufacturing cost, and the refresh rate must increase because of leakage. These scaling challenges are discussed in a recent research paper by Samsung: refresh cost, write latency and cell retention over time are the three major obstacles it identifies to effective DRAM scaling. The design of a new DRAM architecture has to address the issues listed below:
  • The energy and performance overheads of refresh should be reduced as capacity scales.
  • Bandwidth in DRAM should be improved.
  • DRAM's reliability should be increased at low cost.
  • Data movement between the processing elements and DRAM should be reduced.
Reducing the impact of refresh plays a vital role in designing a new DRAM architecture. As DRAM capacity increases, more cells must be refreshed, and recent research shows this becoming a limitation to DRAM scaling and density. The graph below plots the energy spent on refreshing, described as power consumption, against device capacity.
Fig 1: Power consumed by DRAM refresh.

The other graph shows the projected impact of refresh on future DRAM devices: the loss in throughput, versus device capacity, caused by time spent refreshing. The graph is plotted from data collected in studies of device capacity.
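To give a feel for the trend behind these graphs, the short calculation below estimates the fraction of time a DRAM device is busy refreshing, using the standard ~7.8 µs average refresh interval and ballpark refresh command durations (tRFC) that grow with density. The tRFC values are rough, illustrative figures, not vendor specifications.

```python
# Rough illustration of refresh overhead versus DRAM device capacity.
# tREFI is the standard ~7.8 us average refresh interval; the tRFC values
# are ballpark figures used only to show the trend, not vendor specs.
T_REFI_NS = 7800  # average interval between refresh commands, in ns

t_rfc_by_density_ns = {
    "1 Gb": 110,
    "2 Gb": 160,
    "4 Gb": 260,
    "8 Gb": 350,
}

for density, t_rfc in t_rfc_by_density_ns.items():
    busy_fraction = t_rfc / T_REFI_NS
    print(f"{density}: device busy refreshing {busy_fraction:.1%} of the time")
```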

Results and findings:
The results and findings from the research above lead to the following solution directions:
  • Mitigating interference between applications reduces slowdowns and improves performance.
  • Application slowdowns should be quantified and controlled at the memory interface.
Conclusion and future work:
In conclusion, the report presents different ideas and research directions that provide techniques for scaling main memory along with the system and its architecture. The basic principles behind memory scaling are: a) device, software and system cooperation, meaning that information from all layers of the system is exchanged to develop and scale the memory; b) designing the memory system for the common case rather than the worst case; and c) heterogeneity across memory levels, which allows several metrics to be optimized at once. A number of such ideas have been discussed in this paper.


This report shows an approach to a scalable memory system and the other components that make system-level optimization possible. Cooperation is enabled across several levels, including the microarchitecture, the software and the devices, for memory scaling. The challenges of memory scaling can be overcome by embracing heterogeneity in the design.

A Study On Reliability Engineering

Understanding of Reliability Engineering

Chapter 1: Introduction

Reliability engineering applies engineering concepts and rules to behaviour over time. It is a systematic approach to determining the reliability of a product or system across its life cycle, and any industry that produces systems and products needs it. Reliability engineering also helps improve understanding by pinpointing where a product is likely to fail. Reliability does not eliminate failure; rather, it pinpoints the probability that a system or product will fail and suggests ways to mitigate those failures.
Reliability evaluation involves a number of analyses, which vary with the phase of the system's life cycle; the design of the system can be changed after a thorough reliability analysis. The engineers required for this analysis are quality engineers, design engineers, test engineers and reliability engineers. Industry needs reliability for the following reasons: the company's reputation, customer satisfaction, warranty cost, cost analysis and customer requirements.
This paper gives a brief overview of reliability engineering and focuses on current developments in risk analysis, which has become a fundamental aspect of modern industry and its technologies. As stated above, reliability quantifies the probability of system failure and also suggests protective measures. Many components are equipped with barriers that protect them from failing to operate; the products in question may be systems, software, hardware or even people. The main objective is to decrease the potential for system failure. The approach used in the early days was to identify every possible event sequence that could lead to the worst case. Another approach is to determine the consequences and build a framework of safety barriers to prevent the product from failing.
Products were designed to withstand everything on such worst-case lists. With this approach, industry tended towards unnecessary measures and barriers for its products, and the cost of products grew as risk was eliminated. Since this is not practical with modern technology, a more advanced, quantitative approach with a more accurate design framework has been adopted. It was first used in the nuclear and aerospace industries, where missions involve huge investments. Probabilistic risk analysis came into use after extensive research into this modern approach, and system safety has been handled properly with it, since it considers all possible scenarios rather than only the cases that have already occurred.
The next section describes the history of reliability engineering and its evolution. Chapter 3 describes the current use of reliability engineering with a company case study. The discussion then compares the traditional and modern approaches and how they differ.

Chapter 2: History 

The term reliability was introduced in the 1800s by Samuel T. Coleridge. The early frameworks grew as the treatment became more quantitative. Statistics and probability are the main theory behind reliability engineering, and that theory has taken its place in engineering, having earlier been used mostly in gambling and other kinds of prediction. In the early 1900s the concept of reliability was first put to practical use on products: industry needed reliability to obtain better results and benefits. The vacuum tube, once developed, suffered numerous failures, which led to research that put reliability engineering into action. Failure data and their root causes were recorded, and subsequent products took them into account. Projects funded by the military were the first to adopt this concept, and the result was quantitative reliability.
Scientists and researchers have seen a stunning increase in reliability engineering over the last two decades. The modern economy now expects industry to deliver more reliable products and systems. The traditional approach placed most of the value on the product itself; today's approach values the performance and services the product provides, which has enhanced customer satisfaction. Industry's view has changed over the decades, leading to more attention to services: failure itself is no longer the main concern, the service is.
Reliability engineering is a well-established, multidisciplinary scientific field. A number of questions are used to analyse the uncertainty and failure of products: system failures are identified and a more reliable system is developed. Along with this, the design of the system is most important, together with its management.

Chapter 3: Case Study 

Predicting the reliability of products is one of the major tasks industry undertakes before release. This case study is based on confidence intervals for hardware reliability and their use in prediction, which serves several purposes during the development life cycle. Thermal stress as well as electrical effects can be assessed with it during the design stage, and once the product is ready, the reliability prediction is used to set targets for the field. The field failure rate can turn out worse than predicted; a company can cover this by replacing failed products free of charge, and under warranty a similar procedure applies, with customers receiving new products.
Reliability prediction is required by many businesses and can be found in a number of companies, and many models have embraced it. This chapter describes reliability prediction for electronic systems. The company chosen for the case study is D-Link, for products such as digital telephone switches and internet routers.

Approach:

There are several approaches used for reliability prediction. One common method is to add the variances of a sum of uncorrelated variables. Computer-aided design is used to test all the conditions the processor could face and under which it might be damaged; these CAD programs are platforms for automatic condition checking with manual instruction. The program has a database that stores every failed component's issue and is quoted with a 96% confidence rate. A number of other predictions follow from precise observation.

Modeling for reliability:

The reliability model is built from subassemblies, which describe how the product is assembled. Because many small components are put together during assembly, it is easier to predict the reliability of the final product from them. The equations for the lifetime variables of the components are:
F(t) = F1(t) · F2(t) · … · Fn(t)                (3.1)
and
h(t) = h1(t) + h2(t) + … + hn(t)                (3.2)

Here, the life distribution is given by F and the hazard rate by h, for components 1 through n. The equations apply when the components are independent. The experiment performed by D-Link has two parts.
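As a small numerical illustration of equations 3.1 and 3.2, the sketch below assumes independent components with constant (exponential) hazard rates, sums the rates to get the board's hazard rate, and multiplies the survival probabilities. The component failure rates are made up for the example; D-Link's actual component data are not given here.

```python
# Numeric illustration of Eq. 3.1 and Eq. 3.2 for independent components
# with constant (exponential) hazard rates. The failure rates are made up
# for the example; D-Link's actual component data are not shown here.
import math

component_rates_per_hour = {          # hypothetical failure rates
    "microprocessor": 2.0e-7,
    "gates": 1.5e-7,
    "oscillator": 0.5e-7,
    "RAM": 1.0e-7,
    "capacitors": 3.0e-7,
}

def board_reliability(t_hours: float) -> float:
    # Eq. 3.2: the board hazard rate is the sum of the component hazard rates.
    h_total = sum(component_rates_per_hour.values())
    # Eq. 3.1: for independent components the survival probabilities multiply,
    # which for exponential lifetimes gives exp(-h_total * t).
    return math.exp(-h_total * t_hours)

print(f"Board hazard rate: {sum(component_rates_per_hour.values()):.2e} per hour")
print(f"Reliability after 5 years: {board_reliability(5 * 8760):.4f}")
```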

Circuit boards and their composition:

The previous part described product reliability prediction in general; here a new circuit board from D-Link is taken as the example. The board carries a number of components: microprocessors, gates, an oscillator, RAM and capacitors. A thermal analysis of the newly designed board was carried out, and the table below gives the results for these components.
Device           Board Location    Tut (C)
Microprocessor   IC 1              80
Gate             IC 2.4            65
Oscillator       Y 1               45
RAM              IC 3.1            45
Capacitor        C 1-C 50          40

 Database of Component failure:

This database is built by collecting all the information about component failures; along with this, a thorough analysis is done to calculate failure rates. D-Link carries out the data collection in two stages. There are 330 components on the board described above. After data collection is completed, a thermal analysis of the components is performed using thermal analysis tools and software.

Chapter 4: Discussion

Having emerged as a scientific discipline in the 1900s, reliability engineering has taken a critical place in manufacturing industry. The traditional approach applied every possible measure to make the product robust and long-lasting in the field; this made products more expensive and added unnecessary features. Nowadays reliability engineering faces many new challenges, but software and tools have made it much easier to mitigate product failures, and customers are provided with new products upon failure. A recent example is the Samsung Galaxy Note phone: field failures caused by its battery forced the company to recall every unit of that model and refund customers. This is a real-world example of how bad field failures can get.
Modern reliability analysis has absorbed the classical approach. Accident environments are created and simulated to test products and their ability to withstand extreme conditions; while these tests run, instruments keep a close watch on the product, its temperature and other aspects. A Monte Carlo-style framework makes use of the product's history and its failure reports, analysing sequences of events from past history, the design of the product over the years and the component changes along the way.

Even with all this advanced technology, risk remains, and it continues to challenge reliability engineering.

MongoDB Vs Cassandra

Comparison of both databases in terms of security

1      Introduction

Both databases are open source; one is document-oriented while the other is designed for very large datasets. Both belong to the NoSQL family. NoSQL databases are mainly designed for scalability, fast storage, fast access to data and security (Anon., n.d.). They can run on large numbers of nodes and achieve features that were not possible with RDBMSs; reads and writes of the same data do not conflict. The data are distributed over thousands of machines, organized in clusters and accessed through nodes or routers. In this paper the two databases are compared in terms of performance, storage, retrieval time, scalability, reliability and security. Their database models differ: MongoDB is a document store while Cassandra is a wide-column store. Cassandra was developed in 2008 by the Apache Software Foundation, and MongoDB by MongoDB Inc. Cassandra is written in Java and MongoDB in C++ (Anon., n.d.). Both databases are schema-free. Cassandra has no server-side scripting, whereas MongoDB uses JavaScript on the server side.
All three requirements of CAP cannot be satisfied at once. MongoDB follows CP whereas Cassandra follows AP: CP means that some data may be unavailable while the rest stays accurate, whereas AP means some returned data may be inaccurate. Cassandra's applications mostly cover IoT, recommendation engines, fraud-detection applications, playlists, product catalogues and messaging applications; it is built around the scalability class of NoSQL (Bushik, 2012). MongoDB, on the other hand, helps businesses transform by harnessing the power of the data they store; it is used by organizations from startups to larger companies to create applications that perform complex tasks. Cassandra requires minimal administration compared with MongoDB. This report presents all these aspects of both databases and compares them.

2.     MongoDB

MongoDB supports standalone, single-instance operation. It provides very high performance using replica sets, which handle failures (MongoDB, n.d.). Clusters divide large data sets and store them on different machines; combining replica sets with sharded clusters gives high redundancy, and the data remain transparent to the applications. The main features of MongoDB are given below:
  • Iterative and fast development
  • Flexible data model
  • Multi-datacenter scalability
  • Integrated feature set
  • Lower TCO
  • Long-term commitment
  • Flexibility

Data Management for MongoDB

Linear scalability
MongoDB provides cost-efficient horizontal scale-out through sharding, a process that is transparent to software applications. Sharding distributes the data across multiple partitions, also known as shards. Deploying MongoDB in this pattern removes the bottleneck limitations (Ellis, 2009) and reduces complexity. As the data grow, the data are clustered and the cluster size is increased; unlike in other databases, this whole process is maintained automatically, and the application developer needs no sharding logic. Multiple sharding strategies are also allowed, which makes it easy for developers to distribute data across the cluster's resources. The workloads that benefit from this high scalability are given below, and a minimal sketch of enabling sharding follows these subsections:
Ranged sharding
Since MongoDB mainly stores documents, the documents are partitioned into shards as determined by a shard key and its values. Two documents with close key values are very likely to end up close to each other in the cluster.
Hashed sharding
This database uses an MD5 hash of the shard key to distribute documents, which spreads the data reliably and evenly across the shards (Gajendran, 2012).
Zoned sharding
This lets administrators define their own rules for data placement within the cluster's shard zones, giving ranges for data distribution. The administrator can keep refining the data continuously and change the key values to trigger data migration (Hoberman, 2014).
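The sketch below shows what enabling sharding looks like in practice from Python, assuming a sharded cluster is already running and reachable through a mongos router on localhost; the database, collection and key names are illustrative.

```python
# Sketch of enabling sharding from Python; assumes a running sharded cluster
# reachable through a mongos router on localhost. Names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # connect to the mongos query router

# Enable sharding on the database, then shard the collection on a hashed key,
# which corresponds to the hashed-sharding strategy described above.
client.admin.command("enableSharding", "shop")
client.admin.command(
    "shardCollection",
    "shop.orders",
    key={"customer_id": "hashed"},
)
```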

2.1     Architecture of MongoDB

The diagram below shows the MongoDB architecture. It contains application servers, configuration servers and sharded MongoDB replica sets. The components of a sharded cluster are shards, configuration servers and query routers. The data are stored in shards, each of which is a replica set, providing data consistency and availability (Anon., n.d.). The router in the diagram is the query router: it handles queries and provides the interface to the application used by clients, giving direct access to the data in the shards. The router's main job is to target the right shards for the data and return results to the clients. There can be several routers, giving fast access to data and high availability.
The config servers store the cluster's metadata, which maps the cluster's dataset onto the shards. The routers use this metadata to reach the particular data in the shards. There are three config servers in a sharded cluster, as shown in the diagram.


Figure 1: Architecture of MongoDB

2.2     Security

Over the last decade there has been a significant increase in hacking and data-security incidents; by 2021, cybercrime is predicted to cost the global economy around $6.2 trillion annually. The threat to industries that handle data is constant. Data play a vital role in an organization's growth and business analysis, and it is the administrators' task to keep the data from being manipulated or stolen. MongoDB provides security measures for defending itself, controlling access to data and detecting changes in the database (Anon., n.d.). The diagram below gives an overview of this security.

Figure 2: MongoDB
External security measures handle authentication and access to the database; these include LDAP, Kerberos, PKI certificates and Windows Active Directory. The Lightweight Directory Access Protocol is mostly used in business computer networks and operates on a distributed directory (Hoberman, 2014). A computer that wants to use LDAP must be logged into the server and follow the protocol.
Authentication provides a good deal of security, but strong authorization services are also required. In MongoDB, user permissions can be set according to access mode, and this can also be combined with an LDAP server. Auditing is provided and can be used by administrators to determine and track access in the logs.
Encryption is one of the oldest and most effective measures for data security, and MongoDB uses it to encrypt its data on the network. There is a separate engine for encryption and data protection. These built-in features give MongoDB proper management and performance in data access and protection, and encrypted data can be accessed only by authorized users.
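As a hedged example of the authentication and network encryption described above, the sketch below connects to MongoDB from Python with a username/password and TLS enabled; the host, credentials and CA file path are placeholders.

```python
# Hedged example of an authenticated, TLS-encrypted connection; the host,
# credentials and CA file path are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://db.example.com:27017",
    username="app_user",
    password="app_password",
    authSource="admin",                   # database holding the user's credentials
    tls=True,                             # encrypt traffic between client and server
    tlsCAFile="/etc/ssl/mongodb-ca.pem",
)

# Only an authenticated, authorized user can now read or write over this connection.
print(client.admin.command("ping"))
```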

3.     Cassandra

Cassandra is a column-oriented, distributed, fault-tolerant, scalable and high-performance database (Hewitt, 2010). It is difficult to get high availability with big data storage, so the data are partitioned and stored in different locations. Cassandra provides this high availability of data, and its other features are given below:
  • Handles high volumes of data (big data)
  • Fast, random access
  • Variable schema
  • The same data are seen at the same time by all nodes
  • Data processing and access need to be fast
  • Data are partitioned and distributed
  • Availability is higher than in other databases

Availability, consistency and partition tolerance cannot all be achieved fully at once; Cassandra gives high availability but gives up some consistency. It was developed by Avinash Lakshman to power Facebook's messaging search. In this database every node plays the same role, and no single node is a point of failure. As with MongoDB, data are distributed in clusters (Ellis, 2009). All replication strategies are flexible and can be configured by the administrator as needed. The database is designed as a distributed system, so there can be multiple data centers and large numbers of nodes.
It is especially well suited to disaster recovery. Adding a new machine brings a significant increase in read and write throughput. Data are automatically replicated to a number of nodes for fault tolerance, which also provides data safety in cloud computing. Hadoop integration, including MapReduce support, is available for this database, and Apache Hive is supported as well (Abramova, 2014). Cassandra has its own query language, known as CQL, an alternative to SQL that adds a layer hiding the details of the database structure. Drivers are available for Java (JDBC) and a number of other languages.

3.1     Architecture of Cassandra

Cassandra's structure comprises nodes, clusters, data centers, tables, the commit log, mem-tables and bloom filters (Gajendran, 2012). This section presents Cassandra's architecture. Before looking at the architecture, it is worth knowing that Cassandra was designed on the assumption that system failures can and do occur. Distribution is peer-to-peer, with all nodes equal.
Data are partitioned automatically when written to the database, so there is no specific place where data are written sequentially; data can land anywhere. A write goes to the commit log first and is then also written to an in-memory structure, the mem-table (Bushik, 2012). The diagram below shows the Cassandra architecture: there are two Cassandra clusters, with web client access and a number of nodes, and the cluster configuration is provided by a middle-tier architecture.
The architecture of Cassandra also supports replication of data for fault tolerance and efficiency.


Figure 3: Architecture of Cassandra
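To make the replication story concrete, the sketch below uses the Python driver to create a keyspace replicated across two data centers; the contact points and data-center names are placeholders.

```python
# Sketch of multi-datacenter replication with the Python driver; contact
# points and data-center names are placeholders.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1", "10.0.0.2"])   # any nodes; every peer plays the same role
session = cluster.connect()

# Keep 3 replicas of each row in DC1 and 2 replicas in DC2 for fault tolerance.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS shop
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'DC1': 3,
        'DC2': 2
    }
""")
```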

3.2     Security 

Security of data is paramount in today's world: industries insist on data that cannot be manipulated or accessed by third parties. Administrators can create users, with the create user command, and grant them permission to access the database. Cassandra's internal architecture manages users and their passwords in its cluster database, and its own query language can be used to drop or alter such users (Bushik, 2012). Permission management is under the administrator's control, granting different levels of access to data. For security purposes Cassandra therefore provides a number of features, given below:
3.2.1  Client-to-node encryption
This is an extra security option provided by Cassandra. SSL keeps the data from being compromised: communication between the client and the data cluster is protected with SSL encryption, handled independently within Cassandra. For additional security, the settings in the cassandra.yaml file can be overridden at the virtual machine level, where the configuration and protocol can be adjusted to the organization's needs. SSL encryption in Cassandra covers client-to-node traffic, node-to-node traffic and server certification. Data are protected on the client side using the secure socket layer, and data transfer within the cluster is protected in the same way; certificates are generated for all of this protection.
3.2.2  Authentication
The database also follows a pluggable authentication protocol. The authenticator setting in the cassandra.yaml file lets administrators enable these features. By default it is AllowAllAuthenticator, which performs no real authentication and requires no credentials. There is also PasswordAuthenticator for standard authentication in Cassandra, with the credentials stored securely (Hewitt, 2010).
3.2.3  Authorization

Authorization can be configured in Cassandra through the authorizer setting in the cassandra.yaml file. By default it is AllowAllAuthorizer, which performs no permission checks and lets every user do everything. Cassandra provides options for adding security and adjusting it to the use case, so the level of security required by the industry and its administrators can be configured flexibly (Ellis, 2009).
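Putting the client-to-node encryption and password authentication together, a client connection might look like the following sketch; the host, credentials and certificate path are placeholders, and the server must be configured with matching authenticator and SSL settings.

```python
# Hedged sketch of a client connection using password authentication and
# client-to-node SSL; host, credentials and certificate path are placeholders,
# and the server's authenticator/SSL settings must be configured to match.
import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.load_verify_locations("/etc/cassandra/ca.pem")

auth = PlainTextAuthProvider(username="app_user", password="app_password")

cluster = Cluster(
    ["cassandra.example.com"],
    auth_provider=auth,
    ssl_context=ssl_context,
)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
```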