cloud

Clouds That Give Way To Light

Fr. Gregory Hallam and Fr. Emmanuel Kahn preach on the Conception of the Most Holy Theotokos by the Righteous Anna, as well as the life of the Prophetess Anna, mother of the Prophet Samuel.




cloud

Surrounded by a great cloud of Witnesses




cloud

Will cloud hang over Sinner's US Open victory?

In the build-up to the US Open, eventual men's champion Jannik Sinner was cleared of fault or negligence over two failed doping tests. Yet questions remain over the case.




cloud

10 Signs to Know You’re Ready for Cloud IVR (+How to Use It)

A cloud IVR (interactive voice response) system is a call center solution that combines automated voice prompts with integrated features like payment systems and access to live agents. As cloud computing has moved from niche to mainstream, most IVR operations have followed suit by migrating from on-premises to cloud-based solutions.  […]





cloud

Working under a cloud!

In the heart of LingoVille, translator Trina was renowned for her linguistic prowess but was a bit behind in the tech world.  When her old typewriter finally gave out, she received a sleek new laptop, which came with OneDrive pre-enabled.  Initially hesitant about this “cloud magic,” she soon marvelled at the convenience of securely storing …




cloud

Cloud players and open source collaboration

In today's keynote at OSBC, Red Hat CEO Jim Whitehurst claimed that even companies like Google, Amazon and other cloud players are always collaborating: not directly, but through the various open source projects they build their offerings on.

While that's true to some extent, the reality IMO is that many of these companies end up with forks of key projects such as MySQL or Xen, or use extension points to write their own core bits that are not open source and never will be. If you talk to ex-MySQL people, they will tell you that while the community contributed a lot of testing and other "low end" work, almost no major contributions to MySQL came from random outside users. That is the general sentiment I've heard from most open source organizations, communities and projects, and it's certainly our experience at WSO2 as well. Even in Apache, it's usually people who are fairly committed to the project (either through employment, which is most common, or through personal interest) who contribute meaningfully; it's very rare to get a sizable contribution from an outsider.

In fact, the (ab)use of open source by online services companies like Google is exactly why the AGPL license was created. For the uninitiated, AGPL is a viral license like the GPL, except that even online hosting counts as "distribution", thereby forcing service providers to ship the source code for any modifications they've made. Personally I'm not a fan of such aggressive tactics to get people to contribute (that's why ALL WSO2 software is Apache licensed), but there are many people in the FOSS community who come from the free software mindset, as opposed to the open source mindset, and who are not happy that the Googles of the world get a lot out of FOSS without having to share any code at all.

So IMO Jim's wrong on this: Google, Amazon and other major closed cloud platform players will NOT share anything they absolutely don't have to. As a side effect, they will not touch any AGPL code, because it would force them to become a commodity, and that means losing key competitive advantages.

The FOSS movement is about giving power to the people. Cloud is a major risk to that, as cloud vendors are incentivized NOT to offer a common denominator. That's why there's no freedom in the cloud without using a truly open source PaaS and building your own thing on top of it.




cloud

Cloud Native

Together with Sanjiva and the rest of the WSO2 architecture team, I've been thinking a lot about what it means for applications and middleware to work well in a cloud environment - on top of an Infrastructure-as-a-Service such as Amazon EC2, Eucalyptus, or Ubuntu Enterprise Cloud.
One of our team - Lavi - has a great analogy. Think of a 6-lane freeway/motorway/autobahn as the infrastructure. Before the autobahn existed there were forms of transport optimized first to dirt tracks and then to simple tarmac roads. The horse-drawn cart is optimized to a dirt track. On an autobahn it works - but it doesn't go any faster than on a dirt track. A Ford Model T can go faster, but it can't go safely at autobahn speeds: even if it could accelerate to 100mph it won't steer well enough at that speed or brake quickly enough.

Similarly, existing applications taken and run in a cloud environment may not fully utilize that environment. Even if systems can be clustered, they may not be able to dynamically change the cluster size (elasticity). It's not just acceleration, but braking as well! We believe there is a set of technical attributes that software needs to take account of to work well in a cloud environment. In other words, what do middleware and applications have to do to be Cloud Native?

Here are the attributes that we think are the core of "Cloud Native":

  • Distributed / dynamically wired

    To work in a cloud environment, the system must be inherently distributed. What does this mean? It must be able to have multiple nodes running concurrently that share a configuration and share any session state, as well as logging to a central log rather than just dumping log files onto a local disk. Another way of putting this is that it is clusterable. There are different degrees of this: from systems that cluster up to tens of machines all the way to shared-nothing architectures that cluster to thousands or millions of nodes.

    Of course it's not enough to think of a single application here either. Cloud applications are not going to be written in a single language on a single platform in a single runtime. The result is that applications are going to have to be dynamically wired: not just able to find their session state and logger, but also able to find the latest version of a remote service and use it, without being restarted and without any limits on where that service has moved to.

  • Elastic

    If a system is distributed then it can be made elastic. This seems to be the attribute of cloud native that everyone thinks of first. The system should be able to scale down as well as up, based on load. The cluster has to be dynamic, and a controller must use policies to decide when to scale up and when to scale down. In order to be elastic, the controller needs to understand the underlying Infrastructure-as-a-Service (IaaS) and be able to call its APIs to start and stop machine images (a minimal sketch of such a controller appears near the end of this post).

  • Multi-tenant

    A cloud native application or middleware needs to be able to support multiple isolated tenants within the system. This is the ability of Software-as-a-Service to handle multiple departments or companies at once. Compare this to running multiple copies of an application, each in its own Virtual Machine (VM). There are two main reasons why multi-tenancy is much better than just VMs. The first is economics: a tenant has a minor overhead (usually just a row in a database), while a whole VM is costly: it uses a lot more memory and resources, there may be license issues, and it's hugely more complex to manage 1000 copies of an application than one single multi-tenant application with 1000 tenants. The second reason multi-tenancy is important is because it enables:
  • Self-service

    Self-service provisioning and management are key to getting the most out of a cloud system. If I can have an elastic tenant to myself that's cool. But if I rely on an administrator to set it up, configure it and manage it, then that isn't Software, Platform or Infrastructure "as-a-Service". It hasn't bought me faster time to market. Self-service applies at all levels - at the infrastructure level, self-service means managing your own VMs. At the platform level, self-service means managing and deploying production applications and middleware. At the software level, self-service means creating and managing your own tenant in an application.

  • Granularly metered and billed

    One essential point of cloud is pay-per-use. But that has to be granular. Pay-per-year just is not the same as pay-per-hour. Even in a private cloud, metering is essential. In a multi-tenant, elastic environment, creating a new tenant (e.g. a new app server, a new accounting system, a new CRM) is (almost) incrementally free until the point at which that tenant is used. In a normal system model the cost of creating and provisioning a system is so large (think of the meetings!) that it usually obscures the first year's running costs. In a self-service, multi-tenant, elastic system the actual usage is the real cost. Therefore understanding, metering, and monitoring that usage is essential.

  • Incrementally deployed and tested

    Applications running in the cloud need to be updated, just like any other application. But experience with our customers shows that they need to do clever things to handle new versions in a highly scalable, high-volume environment. Our largest customers typically have systems set up where they can incrementally deploy a new version of a system, side-by-side with the old one. Even once a new version is fully unit and system tested, there may be a desire to test the new version "in place" in the live cloud environment. Switching traffic between versions is not just a binary decision: you may want to try the new version with 5% of your live load, as sketched just below this list.
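
To make that last point concrete, here is a minimal, self-contained sketch (in Java, purely illustrative and not WSO2 code) of splitting live traffic so that only a small fraction of requests reaches a newly deployed version. The version labels and the 5% fraction are placeholder values; in a real deployment this decision would typically live in the load balancer or router rather than in application code.

import java.util.Random;

// Illustrative sketch: route a configurable fraction of requests to a newly
// deployed version and the rest to the stable one.
public class CanaryRouter {
    private final double canaryFraction; // e.g. 0.05 = 5% of live traffic
    private final Random random = new Random();

    public CanaryRouter(double canaryFraction) {
        this.canaryFraction = canaryFraction;
    }

    // Picks the target version for a single incoming request.
    public String route() {
        return random.nextDouble() < canaryFraction ? "v2-canary" : "v1-stable";
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(0.05);
        for (int i = 0; i < 10; i++) {
            System.out.println("request " + i + " -> " + router.route());
        }
    }
}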

This list aims to characterize the real challenges in making software properly adapted to a cloud environment. I had a lot more to say about each point, but I wanted to keep this to-the-point. 
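
In the same spirit, here is a minimal sketch of the policy-driven elasticity controller mentioned above. Again this is illustrative Java rather than WSO2 code: IaasClient and ClusterMonitor are hypothetical interfaces standing in for an IaaS API (EC2-style start/stop calls) and for cluster load metrics, and the thresholds are arbitrary policy values.

// Illustrative sketch of an elasticity controller: grow the cluster under
// load, shrink it when idle, within fixed policy bounds.
public class ElasticityController {

    interface IaasClient {                     // hypothetical IaaS wrapper
        void startInstance(String imageId);    // boot one more node from an image
        void stopInstance(String instanceId);  // terminate a running node
    }

    interface ClusterMonitor {                 // hypothetical cluster metrics
        double averageLoad();                  // fraction of capacity in use, 0.0 to 1.0
        int nodeCount();                       // nodes currently in the cluster
        String idlestNode();                   // instance id of the least-loaded node
    }

    private final IaasClient iaas;
    private final ClusterMonitor monitor;

    public ElasticityController(IaasClient iaas, ClusterMonitor monitor) {
        this.iaas = iaas;
        this.monitor = monitor;
    }

    // Called periodically by a scheduler.
    public void evaluate() {
        double load = monitor.averageLoad();
        int nodes = monitor.nodeCount();
        if (load > 0.8 && nodes < 20) {
            iaas.startInstance("worker-image");        // scale up
        } else if (load < 0.2 && nodes > 2) {
            iaas.stopInstance(monitor.idlestNode());   // scale down
        }
    }
}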

I strongly believe that it is only once a system really implements these attributes that it starts to give the full benefits of running in a cloud. And the benefits of "Cloud-Native" systems are immense: better utilization of resources, faster provisioning, better governance. It's probably a whole 'nother blog post to go into the full benefits of having cloud native software!

Have we missed any attributes? Please feel free to comment - and please post a trackback if you write a response.




cloud

WSO2 Stratos - Platform-as-a-Service for private and public cloud

Yesterday we announced something I believe is a game-changer: WSO2 Stratos. What is Stratos?

WSO2 Stratos is a complete SOA and developer platform offered as a self-service, multi-tenant, elastic runtime for private and public cloud infrastructures.
What that means is that our complete SOA platform - now enhanced with Tomcat and Webapp support - is available as a "cloud native" runtime that you can use on the Web (yes, you can try it out right now), on Amazon VPC, or on your own internal private cloud based on Ubuntu Enterprise Cloud, Eucalyptus and (coming soon) VMware vSphere. It is a complete Platform-as-a-Service for private and public clouds.

I'll be writing more about Stratos over the coming weeks and months, and I'll also provide links and tweets to other Stratos blogs, but in this blog I want to simply answer three questions:

  1. I'm already talking to {VMware, Eucalyptus, Ubuntu, Savvis, Joyent} about private cloud - what does WSO2 add that they don't have?
  2. What is the difference between Stratos and the Cloud Images that WSO2 already ships?
  3. Why would I choose WSO2 over the other vendors offering Platform-as-a-Service?
In order to answer the first question, let's look at the cloud computing space, which is most easily divided up into:
  • Infrastructure-as-a-Service (IaaS): this is where Amazon, Eucalyptus, VMware, Savvis and Joyent play
  • Platform-as-a-Service (PaaS): Google App Engine, vmForce, Tibco Silver and now WSO2 Stratos play in this space.
  • Software-as-a-Service (SaaS): Google Apps, Google Mail, Microsoft Office Live, Salesforce, SugarOnDemand - these and many more make up the SaaS category.
To generalize wildly, most people talking about public cloud today are talking about SaaS. And most people talking about private cloud today are talking about IaaS.

SaaS is fantastic for quick productivity and low cost. WSO2 uses Google Apps, Sugar on Demand and several other SaaS apps. But SaaS doesn't create competitive advantage. Mule also uses Google Apps. They may well use Salesforce. SaaS cannot produce competitive advantage because your competitors get access to exactly the same low-cost services you do. In order to create competitive advantage you need to build as well as buy. For example, we use our Mashup Server together with our Sugar Business Messaging Adapter to provide insight and management of our pipeline that goes beyond what Sugar offers.

IaaS is of course a great basis to build apps. But it's just infrastructure. Yes - you get your VM hosted quicker. But someone has to create a useful VM. And that is where PaaS comes in. PaaS is how to speed up cloud development.

What does Stratos give you on top of an IaaS? It gives you an Application Server, Registry, Identity Server, Portal, ESB, Business Activity Monitor and Mashup Server. And it gives you these as-a-Service: completely self-service, elastically scalable, and granularly metered and monitored. Someone in your team needs an ESB - they can provision one for themselves instantly. And because it's multi-tenant, it costs nothing to run until it gets used. How do you know how it's used? The metering and monitoring tells you exactly how much each tenant uses.

2. What is the difference between Stratos and the existing WSO2 Cloud Images?

The cloud images we started shipping in December are not Cloud Native. Stratos is Cloud Native. In practice, this means that when you log into Stratos (go on, try it now) you can instantly provision your own domain, together with a set of Stratos services. This saves memory: instead of allocating a new VM and a minimum of half a gigabyte of memory to each new server, you get a new ESB with zero extra memory cost. And it's much easier. The new ESB will automatically be governed and monitored. It's automatically elastically clustered.

3. Why would I choose WSO2 over other PaaS vendors?

Firstly, if you look at PaaS as a whole there is a huge divide between Public PaaS and Private PaaS. The public PaaS vendors simply don't offer private options. You can't run force.com or Google App Engine applications internally, even if you want to. WSO2 bridges that gap with a PaaS you can use in the public Web, on a virtual private cloud, or on premises.

The second big differentiator between WSO2 and the existing PaaS offerings is the architecture. Mostly PaaS is a way of building webapps. WSO2 offers a complete enterprise architecture - governance, business process, integration, portal, identity and mashups. And we support the common Enterprise Programming Model (not just Java, WebApp, JAX-WS, but also BPEL, XSLT, XPath, Google Gadgets, WSDL, etc). The only other PaaS that I know of that offers a full Enterprise architecture is Tibco Silver.

The third and most important differentiator is about lock-in. Software vendors love lock-in - and Cloud vendors love it even more. So if you code to Google App Engine, you are tied into Google's identity model, Google's Bigtable, etc. If you code to force.com or vmForce, you are tied to Force.com's infrastructure services. If you code to Tibco Silver, you are tied to Tibco. WSO2 fights this in three ways:
  • No code lock-in: we use standards-based coding (WAR, JAX-WS, POJO) and Stratos is 100% Apache License Open Source.
  • No model lock-in: we use standards-based services: 
    • Identity is based on OpenID, OAuth, XACML, WS-Trust
    • Registry is based on AtomPub and REST
    • Business Process is based on BPEL, etc
  • No hosting lock-in: you can take your apps and data from our public PaaS and re-deploy internally or on your own virtual private cloud anytime you like.
I hope you found this a useful introduction to Stratos. If you want more information, contact me at paul@wso2.com, or check out the Stratos website or code.





cloud

Understanding Logging in the Cloud

I recently read an interesting pair of articles about Application Logging in OpenShift. While these are great articles on how to use log4j and Apache Commons Logging, they don't address the cloud logging issue at all.

What is the cloud logging issue?

Suppose I have an application I want to deploy in the cloud. I also want to automatically elastically scale this app. In fact I'm hoping that this app will succeed - and then I'm going to want to deploy it in different geos. I'm using EC2 for starters, but I might need to move it later. Ok, so that sounds a bit YAGNI. Let's cut back the requirements. I'm running my app in the cloud, on a single server in a single geo.

I do not want to log to the local filesystem.

Why not? Well, firstly, if this is say EC2, then the server might get terminated and I'm going to lose my logs. If it keeps running, then the logs are going to grow and kill my local filesystem. Either way, I'm in a mess.

I need to log my logs somewhere that is:
1) designed to support getting logs from multiple places - e.g. whichever EC2 or other instance my server happens to be hosted on today
2) separate from my worker instance, so that the logs survive when that instance gets stopped and started
3) supports proper log rotation, etc.

If I have this then it supports my initial problem, but it actually also supports my bigger requirements around autoscaling and geos.

Stratos is an open source Platform-as-a-Service foundation that we've created at WSO2. In Stratos we had to deal with this early on because we support elastic auto-scaling by default.

In Stratos 1.x we built a model based on syslog-ng. Basically we used log4j for applications to log. So, just as with any normal log4j logging, you would do something like:


// Standard log4j usage: get a named logger and emit a warning.
Logger logger = Logger.getLogger("org.fremantle.myApp");
logger.warn("This is a warning");


We automatically set up the log appenders in the Stratos services to use the log4j syslog appender. When we start an instance, we automatically set it up under the covers to pipe the syslog output to syslog-ng. Then we automatically collate these logs and make them available.
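
For readers who have not used the syslog appender, here is a minimal sketch of what that wiring amounts to if done by hand with log4j 1.x. Stratos configures this automatically; the host, facility and layout pattern below are placeholder values, assuming a syslog-ng listener on the local machine.

import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.net.SyslogAppender;

// Illustrative sketch: send log4j output to a local syslog-ng listener.
public class SyslogSetup {
    public static void main(String[] args) {
        SyslogAppender appender = new SyslogAppender();
        appender.setSyslogHost("localhost");   // where syslog-ng is listening (placeholder)
        appender.setFacility("LOCAL0");        // syslog facility to tag entries with (placeholder)
        appender.setLayout(new PatternLayout("%d %p %c - %m%n"));
        appender.activateOptions();

        Logger.getRootLogger().addAppender(appender);
        Logger.getLogger("org.fremantle.myApp").warn("This is a warning");
    }
}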

In Stratos 2.x we have improved this. The syslog-ng model was not as efficient as we needed, and we also needed a better way of slicing and dicing the resulting log files.

In the Stratos PaaS we also have another key requirement - multi-tenancy. We have lots of instances of servers, some of which are one instance per tenant/domain, and some which are shared between tenants. In both cases we need to split out the logs so that each tenant only sees their own logs.

So in Stratos 2.x (due in the next couple of months) we have a simple Apache Thrift interface (and a JSON/REST one too). We already have a log4j target that pushes to this, so exactly the same code as above works in Stratos 2.x with no changes.
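
As a rough illustration of what such a log4j target looks like, here is a sketch of a custom appender that publishes each event as JSON to an HTTP log-collection endpoint. This is not the actual Stratos agent or its wire format: the endpoint URL, the tenant system property and the JSON field names are all hypothetical, and the message is not escaped, purely to keep the example short.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

// Illustrative sketch: a log4j appender that POSTs each event to a
// hypothetical log-collection endpoint as JSON.
public class RestLogAppender extends AppenderSkeleton {
    private String endpoint = "http://localhost:8080/logs"; // hypothetical collector URL

    public void setEndpoint(String endpoint) { this.endpoint = endpoint; }

    @Override
    protected void append(LoggingEvent event) {
        // Hypothetical payload shape; no escaping, illustration only.
        String json = String.format(
            "{\"tenant\":\"%s\",\"level\":\"%s\",\"logger\":\"%s\",\"message\":\"%s\"}",
            System.getProperty("tenant.id", "unknown"),
            event.getLevel(), event.getLoggerName(), event.getRenderedMessage());
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            conn.getResponseCode(); // force the request; a real agent would batch and retry
        } catch (Exception e) {
            errorHandler.error("Failed to publish log entry", e, 0);
        }
    }

    @Override public void close() { }

    @Override public boolean requiresLayout() { return false; }
}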



We are also going to add models for non-Java environments (e.g. syslog, log4php, etc.).

Now what happens next? The local agent on the cloud instance is set up automatically to publish to the central log server. This takes the logs and publishes them to an Apache Cassandra database. We then run Apache Hive scripts that slice the logs per tenant and per application. These are then available to the user via our web interface and also via simple network calls. Why this model? Because it is really scalable. I mean really, really scalable. Cassandra can scale to hundreds of nodes, if necessary. It's also really fast: our benchmarks show that we can write more than 10k entries/second on a normal server.

Summary

Logging in the cloud isn't just about logging to your local disk. That is not a robust or scalable answer. Logging in the cloud needs a proper cloud logging model. In Stratos we have built one. You can use it from Java today, and from Stratos 2.0 we are adding support to publish log entries either with a simple REST interface or with a super-fast, highly scalable approach using Apache Thrift.




cloud

On the Construction of Efficiently Navigable Tag Clouds Using Knowledge from Structured Web Content

In this paper we present an approach to improving the navigability of hierarchically structured Web content. The approach is based on the integration of a tagging module and the adoption of tag clouds as a navigational aid for such content. The main idea is to apply tagging to better highlight cross-references between information items across the hierarchy. Although in principle tag clouds have the potential to support efficient navigation in tagging systems, recent research has identified a number of limitations. In particular, applying tag clouds within the pragmatic limits of a typical user interface leads to poor navigational performance, as tag clouds are vulnerable to a so-called pagination effect. In this paper, a solution to the pagination problem is discussed, implemented as part of an Austrian online encyclopedia called Austria-Forum, and analyzed. In addition, a simulation-based evaluation of the new algorithm has been conducted. The first evaluation results are quite promising, as the efficient navigational properties are restored.




cloud

Cloud Computing




cloud

An Ontology based Agent Generation for Information Retrieval on Cloud Environment

Retrieving information or discovering knowledge from a well-organized data center generally requires familiarity with its schema, structure, and architecture, which runs against the inherent concept and characteristics of a cloud environment. An effective approach to retrieving desired information or extracting useful knowledge is therefore an important issue in the emerging information/knowledge cloud. In this paper, we propose an ontology-based agent generation framework for information retrieval on a cloud environment in a flexible, transparent, and easy way. When a user submits a flat-text-based request for retrieving information on a cloud environment, the request is automatically deduced by a Reasoning Agent (RA) based on a predefined ontology and reasoning rules, and then translated into a Mobile Information Retrieving Agent Description File (MIRADF) that is formatted in a proposed Mobile Agent Description Language (MADF). A generating agent, named MIRA-GA, is also implemented to generate a MIRA according to the MIRADF. We also design and implement a prototype to integrate these agents and show an interesting example to demonstrate the feasibility of the architecture.




cloud

ORPMS: An Ontology-based Real-time Project Monitoring System in the Cloud

Project monitoring plays a crucial role in project management and is part of every stage of a project's life-cycle. Nevertheless, along with the increasing ratio of outsourcing in many companies' strategic plans, project monitoring has been challenged by geographically dispersed project teams and culturally diverse team members. Furthermore, because of the lack of a uniform standard, data exchange between various project monitoring software packages becomes an impossible mission. These factors together lead to the issue of ambiguity in project monitoring processes. Ontology is a form of knowledge representation with the purpose of disambiguation. Consequently, in this paper, we propose the framework of an ontology-based real-time project monitoring system (ORPMS) in order to solve, by means of ontologies, the ambiguity issue in project monitoring processes caused by multiple factors. The framework incorporates a series of ontologies for knowledge capture, storage, sharing and term disambiguation in project monitoring processes, and a series of metrics for assisting the management of project organizations to better monitor projects. We propose to configure the ORPMS framework in a cloud environment, aiming at providing the project monitoring service to geographically distributed and dynamic project members with great flexibility, scalability and security. A case study is conducted on a prototype of the ORPMS in order to evaluate the framework.




cloud

Cloud Warehousing

Data warehouses integrate and aggregate data from various sources to support decision making within an enterprise. Usually, it is assumed that the data are extracted from operational databases used by the enterprise. Cloud warehousing relaxes this view by permitting data sources to be located anywhere on the world-wide web in a so-called "cloud", which is understood as a registry of services. Thus, we need a model of data-intensive web services, for which we adopt the view of the recently introduced model of abstract state services (AS2s). An AS2 combines a hidden database layer with an operation-equipped view layer, and thus provides an abstraction of web services that can be made available for use by other systems. In this paper we extend this model to an abstract model of clouds by means of an ontology for service description. The ontology can be specified using description logics, where the ABox contains the set of services, and the TBox can be queried to find suitable services. Consequently, AS2 composition can be used for cloud warehousing.




cloud

An architectural view of VANETs cloud: its models, services, applications and challenges

This research explores vehicular ad hoc networks (VANETs) and their extensive applications, such as enhancing traffic efficiency, infotainment, and passenger safety. Despite significant study, widespread deployment of VANETs has been hindered by security and privacy concerns. Challenges in implementation, including scalability, flexibility, poor connection, and insufficient intelligence, have further complicated VANETs. This study proposes leveraging cloud computing to address these challenges, marking a paradigm shift. Cloud computing, recognised for its cost-efficiency and virtualisation, is integrated with VANETs. The paper details the nomenclature, architecture, models, services, applications, and challenges of VANET-based cloud computing. Three architectures for VANET clouds - vehicular clouds (VCs), vehicles utilising clouds (VuCs), and hybrid vehicular clouds (HVCs) - are discussed in detail. The research provides an overview, delves into related work, and explores VANET cloud computing's architectural frameworks, models, and cloud services. It concludes with insights into future work and a comprehensive conclusion.




cloud

DeFog: dynamic micro-service placement in hybrid cloud-fog-edge infrastructures

DeFog is an innovative microservice placement and load balancing approach for distributed multi-cluster cloud-fog-edge architectures to minimise application response times. The architecture is modelled as a three-layered hierarchy. Each layer consists of one or more clusters of machines, with resource constraints increasing towards the lower layers. Applications are modelled as service oriented architectures (SOA) comprising multiple interconnected microservices. As many applications can run simultaneously, and as the resources of the edge and the fog are limited, choosing which services to run on the edge or the fog is the problem this work addresses. DeFog focuses on dynamic (i.e., adaptive) decentralised service placement within each cluster with zero downtime, eliminating the need for coordination between clusters. To assess the effectiveness of DeFog, two realistic applications based on microservices are deployed, and several placement policies are tested to select the one that reduces application latency. Least frequently used (LFU) is the reference service placement strategy. The experimental results reveal that a replacement policy that uses individual microservice latency as the crucial factor affecting service placement outperformed LFU by at least 10% in application response time.




cloud

Research on low voltage current transformer power measurement technology in the context of cloud computing

As the IoT has developed rapidly in recent years, the application of cloud computing in many fields has become possible. In this paper, we take low-voltage current transformers in power systems as the research object and propose a TCN-BI-GRU power measurement method that incorporates signal characteristics based on the transformer input and output. Firstly, basic signal enhancement and extraction of the input and output are completed using EMD and correlation coefficients. Secondly, multi-dimensional feature extraction is completed to improve the data performance according to the established TCN network. Finally, the power prediction is completed using BI-GRU, and the results show that the RMSE of this framework is 5.69, significantly lower than that of other methods. In laboratory tests, after the device was subjected to strong disturbance, its correlation coefficient feature was heavily affected, leading to a large deviation in the prediction; this provides a new direction for future intelligent prediction.




cloud

Cloud Computing: Short Term Impacts of 1:1 Computing in the Sixth Grade




cloud

Cloud as Infrastructure at the Texas Digital Library

In this paper, we describe our recent work in using cloud computing to provision digital library services. We consider our original and current motivations, technical details of our implementation, the path we took, and our future work and lessons learned. We also compare our work with other digital library cloud efforts.




cloud

Kindura: Repository services for researchers based on hybrid clouds

The paper describes the investigations and outcomes of the JISC-funded Kindura project, which is piloting the use of hybrid cloud infrastructure to provide repository-focused services to researchers. The hybrid cloud services integrate external commercial cloud services with internal IT infrastructure, which has been adapted to provide cloud-like interfaces. The system provides services to manage and process research outputs, primarily focusing on research data. These services include both repository services, based on use of the Fedora Commons repository, and common services, such as preservation operations, that are provided by cloud compute services. Kindura is piloting the use of DuraCloud, open source software developed by DuraSpace, to provide a common interface for interacting with cloud storage and compute providers. A storage broker integrates with DuraCloud to optimise the usage of available resources, taking into account factors such as cost, reliability, security and performance. The development is focused on the requirements of target groups of researchers.




cloud

REDDNET and Digital Preservation in the Open Cloud: Research at Texas Tech University Libraries on Long-Term Archival Storage

In the realm of digital data, vendor-supplied cloud systems will still leave the user with responsibility for the curation of digital data. Some of the very tasks users thought they were delegating to the cloud vendor may be a requirement for users after all. For example, cloud vendors most often require that users maintain archival copies. Beyond the better-known vendor cloud model, we examine curation in two other models: in-house clouds, and what we call "open" clouds, which are neither in-house nor vendor. In open clouds, users come aboard as participants or partners, for example by invitation. In open cloud systems users can develop their own software and data management, control access, and purchase their own hardware while running securely in the cloud environment. To do so will still require working within the rules of the cloud system, but in some open cloud systems those restrictions and limitations can be worked around easily with surprisingly little loss of freedom. It is in this context that REDDnet (Research and Education Data Depot network) is presented as the place where the Texas Tech University (TTU) Libraries have been conducting research on long-term digital archival storage. The REDDnet network by year's end will be at 1.2 petabytes (PB), with an additional 1.4 PB for a related project (Compact Muon Solenoid Heavy Ion [CMS-HI]); additionally there are over 200 TB of tape storage. These numbers exclude any disk space which TTU will be purchasing during the year. National Science Foundation (NSF) funding covering REDDnet and CMS-HI was in excess of $850,000, with $850,000 earmarked toward REDDnet. In the terminology we used above, REDDnet is an open cloud system that invited TTU Libraries to participate. This means that we run software which fits the REDDnet structure. We are beginning to complete the final design of our system, and starting to move into the first stages of construction. And we have made a decision to move forward and purchase one-half petabyte of disk storage in the initial phase. The concerns, deliberations and testing are presented here along with our initial approach.




cloud

Should the “CLOUD” be Regulated? An Assessment




cloud

Would Cloud Computing Revolutionize Teaching Business Intelligence Courses?




cloud

Cloud Computing as an Enabler of Agile Global Software Development

Agile global software development (AGSD) is an increasingly prevalent software development strategy, as organizations hope to realize the benefits of accessing a larger resource pool of skilled labor, at a potentially reduced cost, while at the same time delivering value incrementally and iteratively. However, the distributed nature of AGSD creates geographic, temporal, and socio-cultural distances that challenge collaboration between project stakeholders. The Cloud Computing (CC) service models of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are similar to the aspirant qualities of AGSD, as they provide services that are globally accessible, efficient, and stable, with lower, predictable operating costs that scale to meet computational demand. This study focused on the 12 agile principles upon which all agile methodologies are based, thereby increasing the potential for the findings to be generalized. Domestication Theory was used to assist in understanding how cloud technologies were appropriated in support of AGSD. The research strategy took the form of case study research. The findings suggest that some of the challenges in applying the agile principles in AGSD may be overcome by using CC.




cloud

Software as a Service (SaaS) Cloud Computing: An Empirical Investigation on University Students’ Perception

Aim/Purpose: This study proposes and empirically validates a model investigating the factors that influence the acceptance and use of Software as a Service (SaaS) cloud computing services from the individual's perspective, utilizing an integrative model of the Theory of Planned Behavior (TPB) and the Technology Acceptance Model (TAM), with modifications to suit the objective of the study. Background: Even though SaaS cloud computing services have gained acceptance in their educational and technical aspects, they are still expanding constantly alongside emerging cloud technologies. Moreover, the individual as an end-user of this technology has not been given ample attention pertaining to the acceptance and use of SaaS (AUSaaS). Additionally, the higher education sector needs to be probed regarding perceptions of AUSaaS, not only from a managerial stance but also from that of the individual. Hence, further investigation of all aspects, including the human factor, deserves deeper inspection. Methodology: A quantitative approach with a probability multi-stage sampling procedure was conducted, utilizing a survey instrument distributed among students from three public Malaysian universities. The valid collected responses came from 289 Bachelor’s degree students. The survey included a demographic part as well as items to measure the hypothesized relationships between constructs. Contribution: The empirical results disclosed the appropriateness of the integrated model in explaining the individual’s attitude (R2 = 57%), behavioral intention (R2 = 64%), and AUSaaS in university settings (R2 = 50%). The study also offers valuable findings and examines new relationships that constitute a theoretical contribution with proven empirical results. That is, the effect of subjective norms on attitude and AUSaaS adds empirical evidence to the hypothesized model. Knowing the significance of the social effect is important in utilizing it to promote university products and SaaS applications – developed inside the university – through social media networks. Also, the direct effect of perceived usefulness on AUSaaS is another important theoretical contribution that SaaS service providers and higher education institutes should consider in promoting the usefulness of the products and services they develop or offer to students and end-users. Additionally, the research contributes to the literature and is considered one of the leading studies on the acceptance of SaaS services and applications, as the proliferation of existing studies focuses on the general and broad concept of cloud computing. Furthermore, by integrating two theories (i.e., TPB and TAM), the study employed different factors in studying perceptions towards the acceptance of SaaS services and applications: social factors (i.e., subjective norms), personal capabilities and capacities (i.e., perceived behavioral control), technological factors (i.e., perceived usefulness and perceived ease of use), and attitudinal factors. These factors are the strength of both theories, and utilizing them is intended to unveil the salient factors affecting the acceptance of SaaS services and applications. Findings: A statistically significant positive influence of the main TPB constructs on AUSaaS was revealed. Furthermore, subjective norms (SN) and perceived usefulness (PU) demonstrated predictive ability for AUSaaS. Also, SN proved to have a statistically significant effect on attitude (ATT). Specifically, the main contributors to intention are PU, perceived ease of use, ATT, and perceived behavioral control. 
Also, the proposed framework is validated empirically and statistically. Recommendation for Researchers: The proposed model is highly recommended to be tested in different settings and cultures. Also, recruiting respondents with different roles, occupations, and cultures would likely draw more insights from the results obtained in the current research and improve its generalizability. Future Research: Participants from private universities or other educational institutes are suggested for future work, as the sample here focused only on public sector universities. The model included a limited number of variables, suggesting that it can be extended in future works with other constructs such as trialability, compatibility, security, risk, privacy, and self-efficacy. Comparison of different ethnic groups, ages, genders, or fields of study in future research would be invaluable to enhance the findings or reveal new insights. Replication of the study in different settings is encouraged.




cloud

Maxthon Cloud Browser 5.2.7.5000 for PC Windows

Maxthon Cloud Browser is a powerful web browser which has a highly customizable interface. The browser has multiple tools that make your web experience more enjoyable, such as resource sniffer, screen capture tool, night mode and cloud functionality...




cloud

Autocount partners IAB LCCI to launch Asia’s first cloud accounting program

KUALA LUMPUR: AutoCount Dotcom Bhd (ADB), via its wholly-owned subsidiary Auto Count Sdn Bhd (ACSB), partnered with IAB LCCI Ltd, a collaboration formed following the Institute of Accountants and Bookkeepers’ (IAB) acquisition of the London Chamber of Commerce and Industry (LCCI) qualifications.

This agreement sets the stage for Asia’s first Cloud Accounting Certification Program, which will equip finance professionals with essential skills for the digital era.

The program will be launched on January 1, 2025, marking a significant step forward in modernising the region’s accounting landscape.

Under this collaboration, ADB will design the certification curriculum around its AutoCount Cloud Accounting software.

The syllabus will be submitted to IAB LCCI for accreditation.

IAB LCCI is regulated by the UK’s Office of Qualifications and Examinations Regulation (Ofqual), enhancing the certification’s credibility and alignment with global standards.

With LCCI’s extensive reach across Asia, the certification will be accessible through its network of educational centres and partner institutions, providing aspiring accountants with in-demand cloud accounting expertise.

ADB CEO Yan Tiee Choo said this collaboration with IAB LCCI allows the company to empower the next generation of accountants across Asia.

“Our goal is to provide a practical and accessible path to certification in cloud accounting, supporting not only recent SPM (Sijil Pelajaran Malaysia) graduates but also those seeking to upskill in a fast-changing industry.

“Together, we are paving the way for a more adaptable, technology-driven accounting workforce across the region,” he said.

Bursa Malaysia-listed ADB is a leading provider of accounting and business software solutions.

IAB Group and IAB LCCI CEO Sarah Palmer said LCCI has been a leader in offering globally recognised qualifications for over 120 years.

“Our partnership with ADB reflects our shared commitment to advancing the accounting profession by equipping future finance professionals with relevant, high-quality skills.

“By collaborating with ADB, a pioneer in cloud accounting solutions, we ensure that this certification meets the industry’s evolving needs and helps individuals succeed in a digital-first finance sector,” she said.

The certification offers a clear advantage for students and professionals looking to expand their accounting capabilities.

By learning on ADB’s cloud platform, candidates will gain hands-on experience in digital accounting practices, preparing them for careers in an increasingly automated finance landscape.

With the signing of this agreement, ADB solidifies its position as a leader in cloud accounting solutions and furthers its commitment to innovation in financial technology and education.

This partnership aligns with ADB’s vision to become Asia’s top business software provider, fostering a future-ready workforce and advancing the region’s digital transformation.




cloud

Major Services Worldwide Disrupted by Cloud Outage: What You Need to Know and How to Fix It

Hey Geeks! Some big news hitting the wires today. A massive tech outage has thrown a wrench into major services worldwide, messing with everything from public transport to hospitals and banks. Here's the scoop: What's Causing the Outage? The chaos seems to be linked to a software update from CrowdStrike, a well-known player in cybersecurity providing services to Fortune 500 companies and government agencies, that affected Microsoft cloud systems. This update caused a glitch in Falcon, their cloud file protection system, effectively blocke...




cloud

Alibaba Cloud disrupted after fire at Digital Realty datacenter in Singapore

A fire caused by a lithium-ion battery explosion at a Digital Realty datacenter in Singapore has disrupted Alibaba Cloud services.





cloud

Cloudflare to EU: Anti-Piracy Measures Shouldn’t Harm Privacy and Security

Cloudflare is urging the EU Commission to exclude the company from its upcoming Piracy Watch List, despite requests from several rightsholder groups for its inclusion. The American company says it's committed to addressing piracy concerns but not at the expense of user privacy and security. Instead, the European Commission should ensure that its Piracy Watch List does not become a tool for advocating policy changes.





cloud

Trump's economic agenda for his second term is clouding the outlook for mortgage rates

Donald Trump's election win is clouding the outlook for mortgage rates even before he gets back to the White House.





cloud

Leaning on Cloud Technology in Tough Times

As your morning routine becomes less about grabbing that cup of coffee from your favorite java spot on the way to the site, you may still be tasked with delivering on business as usual. 




cloud

Cloud-based gas reader

The Honeywell BW Connect attaches to GasAlertMicroclip XL and X3 detectors – as well as GasAlertMax XT II detectors – by sliding onto the charging port of the detector, and pairs to a smartphone via Bluetooth.




cloud

HiveWatch & Genea Partner to Provide Cloud-based Solutions

Genea offers a cloud-based access control and visitor management platform built on non-proprietary hardware that empowers users with the ability to monitor their buildings and provision credentials from anywhere.




cloud

Genetec Showcases Cloud & Edge Solutions at ISC West 2023

At ISC West Booth #20045, Genetec Inc., a leading technology provider of unified security, public safety, operations, and business intelligence solutions, will showcase the latest version of Security Center, Streamvault Edge™ hybrid cloud solutions, and the Cloudrunner™ vehicle-centric investigation system.




cloud

Eagle Eye Networks Previews Camera Direct-to-Cloud Solution

Eagle Eye Networks, the global leader in cloud video surveillance, is previewing Eagle Eye Camera Direct Complete, as well as showcasing the newly enhanced Enterprise Edition and new artificial intelligence (AI)-powered tools for enterprise businesses at ISC West 2023.




cloud

Reach for the Clouds

Security Industry Trends




cloud

UL 827A Opens Up Cloud Service Opportunities

Many are intrigued by some of the changes that have been introduced and adopted toward consistent services through the COVID-19 pandemic — everything from changes to business logistics to actually augmenting UL standards to accommodate work-from-home (WFH) staffing.




cloud

ONVIF Introduces New Working Groups for Cloud, Metadata & Audio

ONVIF working groups are subsets of the ONVIF Technical Committee and Technical Services Committee, and any ONVIF member company at the full or contributing membership level is encouraged to participate in this work. 




cloud

Axis Cloud Connect & AXIS Device Manager Extend Receive SOC 2 Attestation

The System and Organization Controls (SOC 2) Type 1 report highlights Axis Communications’ efforts to protect customer and partner data with robust cloud security measures.




cloud

YourSix Cloud Service Provides Important Protection for IP Camera Systems

As computer and network technology continues to burrow into the devices we depend on for daily connectivity, there is a clear and present danger for all connected devices.




cloud

OberCloud ABI Simplifies IP Camera Hookups & Network Monitoring

New OberCloud ABI product simplifies IP camera hookup and provides integrators with a great RMR tool.




cloud

Verkada Report: 90% of Security Leaders See Cloud Solutions as Future

Verkada’s newly released 2024 State of Cloud Physical Security report is based on insights from IT and physical security leaders across various sectors.




cloud

ESX 2024: AI, Cloud & Integrations

This year’s Electronic Security Expo (ESX) was held June 4-6 in Louisville, Ky., at the Kentucky International Convention Center. The event was host to education sessions, networking events and — of course — the expo floor.




cloud

VerkadaOne 2024: Empowering Integrators With AI-Enhanced Cloud Solutions

Verkada’s partner event in Denver brought together over 1,600 security professionals to showcase cutting-edge cloud-based solutions, AI-enabled products, plus insights into the evolving role of physical security technologies.




cloud

Cloud Solutions Begin to Soar

A dynamic landscape marked by the rising application of AI and its subsets is fostering advanced analytics and real-time monitoring, while the concurrent adoption of cloud-based solutions and edge computing underscores a shift toward scalable infrastructures.




cloud

How to Choose the Right Cloud Architecture for Your Customers

Learn about some common configurations of cloud video and questions to ask to determine the best one for your customers’ operations.