C# and .Net Core Data Development
We have decades of experience consulting onsite and remotely. We work with partners and build our own solutions in areas ranging from Real Estate and Law to Cryptocurrency and Technology.
What separates us from other providers is our;
If you need software engineering in Microsoft .Net and C#, do contact us with your requirements. You benefit from a UK-based software consultancy: we can understand your business needs without involving reams of other technical staff, and we will get the job done.
If you are an enterprise, it may be far easier to have the development done by us than to go through a hiring agency or advertise the role.
A library is a separate piece of functionality, reusable across multiple applications. When organisations build their own APIs internally, internal business-domain requirements leak into them. Info Rhino builds agnostic libraries which are reusable and easier to abstract away from any one application.
An API is an endpoint which accepts arguments and returns information - typically XML or JSON data. We focus on separating functionality to facilitate reusability.
Remember, APIs and libraries perform the same operations in different spaces: they receive an input and return an output. They are unopinionated and useful.
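A minimal sketch of what we mean by an "agnostic" library: input in, output out, with no business-domain types leaking into its signature. The names here are illustrative only, not code from any client engagement.

```csharp
// Illustrative sketch of a domain-agnostic library function.
// Because it is a pure function with no business-domain dependencies,
// it is equally usable from a desktop app, a service, or an API endpoint.
using System;

public static class TextStats
{
    // Counts whitespace-separated words; same input always yields the same output.
    public static int WordCount(string text) =>
        string.IsNullOrWhiteSpace(text)
            ? 0
            : text.Split((char[])null, StringSplitOptions.RemoveEmptyEntries).Length;
}
```

The same method could back a web API endpoint without change, which is exactly the reusability the separation is meant to buy.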
Most intermediate and junior developers build everything into an application. We take a continual-refactoring approach, constantly moving code out of congested libraries. This is the exact opposite of how most enterprise development evolves, where code is considered locked in early, covered by unit tests, and nobody dares change it.
Instead, we focus on building what is needed, and on keeping code as clean and structured as possible. The cleaner the code and the greater the test coverage, the longer the timeframe.
Our goal in building libraries is to leave clients with reusable code. However, if a client just wants something implemented, we can happily take this approach too.
What sets Info Rhino apart from other software consultancies is our immense experience building reports, building data warehouses, implementing ETL, and building and processing cubes. We have substantial experience writing PL/SQL, T-SQL, SQL, MDX, and other more esoteric querying languages against document databases.
Clients can be assured that any C# development will be backed by a well-designed relational or analytic data model. We work with tooling such as SQL Server Database Projects to ensure the database is buildable, which again gives assurance to the development and release process.
We frequently connect to APIs to retrieve data, and have written our own sophisticated scraping engines for automating the handling of unstructured web content. We are experienced in handling nested JSON data and moving it into more traditional relational data stores and data warehouses.
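A hedged sketch of the kind of JSON flattening this involves, using System.Text.Json. The class and method names are ours for illustration; real pipelines would add type handling and error recovery, but the shape of the problem is the same: turn a nested document into path/value pairs ready for a relational staging table.

```csharp
// Flattens nested JSON into (dotted-path, value) pairs - a common first step
// before loading document data into a relational store or data warehouse.
using System;
using System.Collections.Generic;
using System.Text.Json;

public static class JsonFlattener
{
    public static Dictionary<string, string> Flatten(string json)
    {
        var rows = new Dictionary<string, string>();
        using var doc = JsonDocument.Parse(json);
        Walk(doc.RootElement, "", rows);
        return rows;
    }

    private static void Walk(JsonElement el, string path, Dictionary<string, string> rows)
    {
        switch (el.ValueKind)
        {
            case JsonValueKind.Object:
                foreach (var p in el.EnumerateObject())
                    Walk(p.Value, path == "" ? p.Name : $"{path}.{p.Name}", rows);
                break;
            case JsonValueKind.Array:
                int i = 0;
                foreach (var item in el.EnumerateArray())
                    Walk(item, $"{path}[{i++}]", rows);
                break;
            default:
                rows[path] = el.ToString();
                break;
        }
    }
}
```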
We are happy to create small applications to push data to public APIs if required.
The Software Development Lifecycle (SDLC) is a formalised process which attempts to build software as cost-effectively as possible, based upon the requirements customers express to the technical solutions provider.
This approach has experienced significant pushback from software practitioners over multiple decades. It is often characterised as up-front design and up-front planning. Typically with waterfall, we define a project scope, break that scope down into requirements, and assess those requirements to estimate how long the project will take. This gives the stakeholder a provisional estimate of the project's costs. The project manager will define project milestones to assess whether the project is delivering to budget.
Whilst much of the software industry is quick to distance itself from the legacy waterfall SDLC, to most external customers this approach makes the most sense. If we build a house, we want a pretty good estimate of how much it will cost. We would start with architectural drawings, use these to estimate materials costs, and finally determine how much time the project will take to complete.
There are many variations in implementation, the more common ones being Kanban and Agile SCRUM. Agile has been almost universally seen as the replacement for waterfall, in that it attempts to iterate faster and give the customer feedback sooner on the solution being implemented.
One of the biggest challenges with agile is that almost every practitioner has a different view of what it sets out to achieve. In many situations the lack of a structured approach may lead to software of inferior quality; agile practitioners counter that software quality is higher because the result is suited much more to the customer's needs. Spend an hour or two on tech Twitter and you will see all manner of opinion on agile.
Really, we see agile as a set of lightweight tools and approaches for delivering software with less process. If a development team is spending too much time on agile ceremonies and formalised approaches, then the SDLC is not running in an agile manner.
Our opinion is that formalised agile is more expensive than waterfall when implementing medium-sized software solutions, because it tends to build up technical debt. Where agile works well is in building Proof of Concept solutions, and in Business as Usual (BAU) and Run the Business (RTB) SDLCs.
We will avoid defining the CI/CD SDLC in terms of a software delivery pipeline; for those wishing to outsource software development, think of it more in terms of ongoing work. The emphasis is on code and data (artefacts) being built, tested, and deployed as they are released into the integration environment. Soon after the latest build reaches the integration environment, it is ready to be deployed to production. In implementations of CI/CD there can still be manual testing, approval processes, and code quality reviews - to ensure that humans are able to make decisions.
Whilst very popular amongst thought leaders, many will be surprised to find that Jobs to be Done Theory (JBDT) has been around for as long as Agile. We think of it more as an outcome-based approach. We focus on a;
An example of this;
The example above may seem a little trite and obvious, but the powerful factor in JBDT is that we don't need to focus on the actual technology first. It can be very useful to work with customers through a more declarative approach. Some may claim that this is Behaviour-Driven Development (BDD), but it is not: BDD pretty much maps a set of business requirements or user stories to a coded implementation, with the ability to plug in testing at the same time. JBDT really helps to tease out the vision behind a customer's requirements.
We focus on iterative development, which aims to build enough functionality and structure in the first phase, then add more functionality and structure in subsequent phases until the software is good enough to meet the needs of the customer.
Where we feel that a relatively involved set of steps or processes is required by a system, we will undergo a Jobs to be Done scoping exercise. This may not be taken to the point of defining a hard set of requirements; it is based more upon identifying a common understanding of how the solution should work. We do this to remain flexible in selecting which tools and technologies will be used, and to talk to collaborators and partners to get a feel for the right implementation.
We may introduce agile monitoring software to help us to keep abreast of how the solution is evolving but we don't tend to focus too much on a formalised process.
Experience of implementing solutions for enterprises and small businesses gives us a pretty good idea of the time it takes to implement software solutions. To get a feel for this, we often advise companies to look on job websites where software development contractors are needed. We typically see contracts running from 12 weeks, to six months, to one year - and often these contracts are rolling. It may be possible to see that these projects are hiring several developers, business analysts, and testers.
We rarely engage business analysts and project managers, to reduce the amount of duplicated work translating requirements from one area of expertise to another. This reduces cost and speeds up development.
To help manage your costs, try to come up with a basic set of requirements and we can undertake a scoping exercise where we charge a small fee just to help you get more clarity on your requirements.
One of the most challenging elements of implementing solutions across multiple stacks is that developers won't have expertise in every technology. It seems almost universal, for example, that developers who specialise in front ends cannot build databases, while developers who are stronger on back-end server architecture prefer to write more code in that layer rather than in the front end or the database.
We look at the solution as a whole and think in terms of which layer the functionality should reside within. We are not afraid to challenge convention and implement more generic and dynamic approaches to make implementation more effective, whereas other consultancies will try to do things the conventional way.
Another major benefit of working with Info Rhino when implementing solutions across multiple stacks is that we will happily shield experts in their technology from the wider project artefacts. Many developers prefer to focus on their area of speciality; others prefer to be involved in the solution overall. We identify talents and work with them to be at their most effective.
We have used OpenAI's ChatGPT with certain languages and frameworks to build quite sophisticated components and code, which can then be enhanced or repurposed for demonstration to different audiences and customers. You won't find many companies willing to openly admit that they actively embrace AI-based development. We just want to focus on the solution, and where we see an opportunity to lower complexity we happily do so.
For marketing purposes, to see whether we could drive more traffic to our website, we have started creating small videos giving an overview of our technology and core principles. We built a small .Net Core application that works with both the Google Text-to-Speech and Azure Cognitive Services APIs to convert text to audio MP3 files. We can then put together simple videos using screengrabs and footage with this audio to produce YouTube marketing presentations.
For one of the major pieces of development in 2023, we faced highly challenging architectural decisions. Our Web Data Platform works mainly by data being dropped into different folders on our web server, with the website detecting these new data items and making them available to website visitors. Most data can be served in semi-real time without continually hitting databases, and we make use of the .Net cache.
Decisions were needed on which type of store data should be held within, what protocol should be used to move data to the website, and how to take advantage of lower licence costs for databases. Additionally, we had to identify how we scale out as the number of users grows.
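The drop-folder pattern described above can be sketched in a few lines. This is our illustrative sketch, not the platform's actual code: files dropped into a folder are detected and held in an in-memory cache, so requests avoid continually hitting a database or the file system.

```csharp
// Watches a drop folder and keeps file contents in memory.
// A production version would retry reads while a file is still being
// written, and evict stale entries; this shows only the core pattern.
using System;
using System.Collections.Concurrent;
using System.IO;

public sealed class DropFolderCache : IDisposable
{
    private readonly ConcurrentDictionary<string, string> _cache = new();
    private readonly FileSystemWatcher _watcher;

    public DropFolderCache(string folder)
    {
        // Pre-load anything already present in the folder.
        foreach (var file in Directory.GetFiles(folder))
            _cache[Path.GetFileName(file)] = File.ReadAllText(file);

        // Refresh the cache as new data items are dropped in.
        _watcher = new FileSystemWatcher(folder) { EnableRaisingEvents = true };
        _watcher.Created += (_, e) => _cache[e.Name!] = File.ReadAllText(e.FullPath);
        _watcher.Changed += (_, e) => _cache[e.Name!] = File.ReadAllText(e.FullPath);
    }

    public string? Get(string name) => _cache.TryGetValue(name, out var v) ? v : null;

    public void Dispose() => _watcher.Dispose();
}
```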
Here are some of the technologies we have employed and are planning to use as the platform needs to scale up;
We focus on moving code to the right place as the project evolves. We only write unit tests when a complex calculation or evaluation is being performed - as a means of understanding the task, and as the way we communicate our understanding to other developers in the future.
In terms of SOLID principles, our code typically conforms to this approach, which leads to higher cohesion and lower coupling.
We use Dependency Injection and Inversion of Control out of habit, but again, we don't have to.
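A minimal constructor-injection sketch of that habit, with hypothetical names of our own choosing: the calculator depends on an abstraction, so the data source can be swapped (a database, an API, a test double) without touching the calculation.

```csharp
// Inversion of control via constructor injection - no container required.
using System;
using System.Collections.Generic;
using System.Linq;

public interface IRateSource
{
    IEnumerable<decimal> GetRates();
}

public sealed class InMemoryRateSource : IRateSource
{
    private readonly decimal[] _rates;
    public InMemoryRateSource(params decimal[] rates) => _rates = rates;
    public IEnumerable<decimal> GetRates() => _rates;
}

public sealed class AverageRateCalculator
{
    private readonly IRateSource _source;

    // The dependency is injected; the calculator never constructs
    // a database client or API handler itself.
    public AverageRateCalculator(IRateSource source) => _source = source;

    public decimal Average() => _source.GetRates().Average();
}
```

A DI container such as Microsoft.Extensions.DependencyInjection can wire this up automatically, but the design benefit comes from the abstraction itself.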
We have worked on large calculation engines and have had to substantially improve the performance of these large applications. We have realised performance gains through micro-adjustments. We regularly introduce parallel execution into our processing applications, and will implement asynchronous patterns where necessary. However, there is a lot to be said for scaling out rather than overburdening the software with highly complicated code.
We always prefer convenience over performance where it is clear performance is not the main focus. An example would be LINQ expressions: we have removed LINQ to achieve performance gains, but it is rare.
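A sketch of that trade-off, with names of our own invention: the LINQ pipeline is the convenient form we reach for by default, while the hand-rolled loop avoids intermediate allocations and can be measurably faster in a hot path. Both compute the same result.

```csharp
// Same computation two ways: sum of squares of the even numbers.
using System;
using System.Linq;

public static class SumOfEvenSquares
{
    // Convenient, declarative - the default choice.
    public static long WithLinq(int[] values) =>
        values.Where(v => v % 2 == 0).Select(v => (long)v * v).Sum();

    // Allocation-free loop - what we fall back to when profiling
    // shows the LINQ version in a hot path.
    public static long WithLoop(int[] values)
    {
        long sum = 0;
        foreach (var v in values)
            if (v % 2 == 0)
                sum += (long)v * v;
        return sum;
    }
}
```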
What about Test Driven Development? These are all valid and worthy pursuits; however, we are a data-focused company, more focused on data and outputs. We would rather use a data testing framework such as NBi, use Selenium to run tests on a website, or inject JSON objects into unit tests, than hand-code thousands of unit tests.
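A hedged sketch of what "injecting JSON objects into unit tests" can look like, using illustrative names and a trivial system under test (a sum): the test case lives as data - input plus expected output - and is deserialised rather than hand-coded, so adding cases means adding JSON, not code.

```csharp
// Data-driven test case: input and expectation are carried in JSON.
using System;
using System.Linq;
using System.Text.Json;

public sealed record Case(int[] Input, int Expected);

public static class JsonCaseRunner
{
    // Returns true when the system under test (here, a simple sum)
    // matches the expectation embedded in the JSON test case.
    public static bool Run(string caseJson)
    {
        var c = JsonSerializer.Deserialize<Case>(caseJson)!;
        return c.Input.Sum() == c.Expected;
    }
}
```

A test runner can then loop over a folder of JSON fixtures, which suits a data-focused shop better than thousands of hand-written assertions.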
We enjoy writing unit tests, so if you have a library which needs this work undertaken on it, feel free to contact us about this.
An important point to consider: if your code adheres to SOLID principles, it will implicitly facilitate test coverage.
We are familiar with many of these patterns, but instead focus more on SOLID approaches. We will sometimes research a specific pattern or adopt our own consistent approaches to help convey meaning to our implementation.
We never set out to implement patterns for the sake of it. We would not use Hungarian notation liberally, and don't want this to get in the way of achieving the result a client needs.
Our experience is solely Windows and Server based development. All websites we deploy are on a Windows Server hosting IIS. We have explored Azure, and Amazon Web Services on numerous occasions for our clients and our own projects. If you have specific needs, we work with many experts in these areas and are more than capable of implementing your needs.
A host of new features has been added to add more services to users of our cryptocurrency platform, and these features are available within the standard Web Data Platform CMS.
We built a complete set of operations to manage the analysis of payment data and automate the validation of those messages, updating the balances of members who paid to access website content.
There are many excellent data visualisation frameworks in JavaScript. We have built generic displays that detect data structures of certain shapes, providing more blanket access to data.