
Use of AWS (Amazon Web Services) to Build Lightmass

So I am attempting to use an EC2 instance on Amazon Web Services to add some processing power to my Lightmass builds. This was referenced here, among other places. Like the OP on that page, I have the issue that my local machine doesn't have the chops to do the lighting build I'm trying to run by itself: I left it running all day, and after eight hours it had not even finished building the scene, let alone started on the actual lighting. To be fair, I am building an 8 km², densely foliated, highly detailed outdoor level.

I was able to set up an EC2 instance, move the DotNET folder over, and run SwarmAgent on the server. I was also able to get the SwarmAgent instances on both my local and the remote machine to show up in the Coordinator. I made sure both machines were in the same group, and both had "*" in the allowed remote agents field.

However, when I kicked off a light build, ONLY the local agent was used. This was true even when I set "avoid local execution" to True and ran the remote agent on an instance with significantly more processors and memory than my local machine.

To make sure it wasn't merely impatience, I started a new project with a smaller level, thinking that if the full export to Swarm had time to finish it might use the remote machine. Even in this case, however, the light build used ONLY my local machine. I had the Swarm Coordinator running on my local machine and the Swarm Agent running on both machines when I kicked off the build.

Both machines can ping each other, and both machines have ports 8008 and 8009 open to both TCP and UDP traffic. At one point I even had both machines connected by OpenVPN and able to ping each other by hostname.
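For anyone checking the same thing: ping only proves ICMP works, not that the Swarm ports accept TCP connections. A quick sketch like the following (the hostname is a placeholder for your other machine) can confirm the ports are actually reachable:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, and unresolvable hosts
        return False

# 8008 and 8009 are the ports Swarm traffic uses; replace the hostname
# with the actual IP or name of the remote machine.
for port in (8008, 8009):
    print(port, can_connect("remote-machine", port))
```

Run it in both directions (local to remote and remote to local), since the agents connect to each other, not just to the coordinator.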

So this leaves me with several questions I need answered.

First, it was mentioned in another thread on Lightmass that you can break up your lighting builds into smaller chunks. How would I do that?

Second, how do I get the AWS cloud servers to participate in the Lightmass build? What files does the remote machine NEED to run the build?

Third, what determines whether or not a remote agent participates in a Lightmass build?

Product Version: UE 4.8

asked Dec 26 '17 at 01:10 PM in Rendering


Ironbelly Dec 02 '15 at 02:36 PM

I am running into the same problem. All of the machines can ping each other, and each machine shows up in the Swarm Coordinator, so they are all talking to each other. But when I kick off a build, even with 'avoid local execution' set to true, it only uses the current machine. I am running NetWorx on the master machine, and I can see the outgoing data spike as soon as I do this, so it looks like it's sending out the data, but the other machines sit idle doing nothing. Help would be greatly appreciated here.

RuBa1987 Sep 04 '16 at 12:27 AM

I'm pretty sure the problem here is that when the Coordinator sends back an IP for the machine that will be doing the work (one of the other agents), it sends back the private IP of your EC2 instance. Go into the Swarm Agent on your local machine and raise the log level to something like extra verbose. You should see a line in there that says something like "Trying to open a remote connection to ...". There will be an IP address in that line; if you try to ping it, you won't be able to, because it's internal to Amazon. Compare it to the private IP address of your EC2 instance; it should be the same.
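A quick way to sanity-check the address from that log line (the sample IPs below are made up) is to test whether it falls in a private range, which would mean it is only reachable inside Amazon's network:

```python
import ipaddress

def is_private(ip: str) -> bool:
    """True if the address is in a private (RFC 1918 etc.) range."""
    return ipaddress.ip_address(ip).is_private

# A typical EC2 default-VPC address vs. a public one
print(is_private("172.31.5.10"))  # -> True: internal to AWS, unreachable from outside
print(is_private("54.23.1.9"))    # -> False: a routable public address
```

If the address the coordinator hands out tests private, that confirms RuBa1987's diagnosis: the local agent is being told to connect to an address it can't reach.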

TomShirk Sep 06 '16 at 06:29 PM

That's pretty helpful. Do you know of any workarounds to that issue?

RuBa1987 Sep 06 '16 at 08:19 PM

I just set up a VPN.

1) Create a new AWS instance for your VPN (I used Ubuntu)
2) Connect your computer to the VPN
3) Connect your lighting build server(s) to the VPN
4) Connect your coordinator to the VPN

That's really about it. You need the servers that will be running the swarm agents to have the right stuff installed but other than that you should be good to go. I was able to get it working.

You can go another route with this by setting up a VPC that allows hardware access, but I didn't see any reason to do that for what our team needs. You still need to set up the VPN anyway.

nymets1104 Feb 01 '17 at 04:53 AM

Hello, curious if you ever got this to work? I would also like to know the specs, or at least the number of cores, you had on your EC2 instance. I have found that when connecting a computer through Swarm Agent over LAN, if the coordinator detects the slightest amount of CPU usage from the client, it will not be used in the build. I would also like to know whether you were using the public IPs of your local and remote machines or the private ones prior to using a VPN connection?

RuBa1987 Feb 03 '17 at 05:40 PM

Yea, got this working and it works pretty well. We just use what we can get on the spot market, normally something in the c4 range. We build lighting once or twice a month and only 1 of us builds it so it's not a huge deal for us to have to go get an instance.

In terms of Swarm thinking your computers are being used, just adjust the settings of the agent. In one of the drop-downs it says something about developer options or config or something like that. Change the tolerance in there. I set mine up to run no matter what the computer(s) are doing, since they are only for lighting.

TomShirk Feb 27 '18 at 03:59 PM

Any chance we could trouble you to create some good documentation/tutorials, RuBa?

xaviprz Dec 12 '17 at 08:54 PM

Is there any documentation on how to set up AWS from the beginning? I want to use it for Lightmass builds.

TomShirk Dec 27 '17 at 03:38 PM

Not that I've been able to find; my purpose in starting this thread was to try to build such documentation.

TomShirk Dec 27 '17 at 03:39 PM

So far it comes to:

1) Create AWS compute instances to run the build
2) Install Lightmass on those instances
3) Create a VPN service instance in AWS
4) Join that VPN from your computer and from each of your build instances
... Profit!

Fisher007 Jan 23 '18 at 01:40 PM

I tried using AWS but failed miserably. First I wanted to have everything running on the VM, because having Lightmass communicate over the Internet can be problematic. But I couldn't run UE4, as it gave an error saying that DirectX 11 feature level 10 is required. I tried to install DX every way I found on Google but had no luck. The VM was running Windows Server 2016. Then I tried the above-mentioned way, running only a Swarm client, but unfortunately Windows Server doesn't have .NET 3.5, and installing it was very problematic: 90% of the online solutions I found required the install DVD, which is obviously not available with a cloud VM. Not sure if that or something else was the reason, but modifying the settings in the Swarm client gave errors, and it ultimately wasn't able to connect to the Coordinator. Too bad, I would have been really interested to see how this machine performs; it was a g3.4xlarge with 16 virtual cores.

TomShirk Jan 23 '18 at 07:20 PM

I think that there might be an issue there with using the right kind of EC2 instance ...the fact that you were running on a version of Windows Server screams to me "not really designed for graphics-intensive applications!".

You'd need to make sure your EC2 instance meets UE4's system requirements...

Judge Axl Mar 08 '18 at 06:48 AM

The installation media needs to be created from a snapshot and attached to the EC2. See instructions here: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/windows-optional-components.html

TomShirk Apr 23 '19 at 01:59 PM

That helps a little, I think, but still doesn't get us all the way there. I am thinking I would like to create (since no one else seems to have done so) a detailed, step-by-step how-to for setting up AWS clusters for doing UE4 Lightmass builds. There are still more questions that need to be answered to get that to work, though!

Ironbelly Apr 23 '19 at 04:41 PM

Honestly, your time would be better spent setting up the GPU Lightmass baker and working with that. A single high-end GPU will outperform 100 CPU instances without breaking a sweat; it's crazy.

TomShirk Apr 23 '19 at 04:43 PM

Do you have a link on how to do that...?

Ironbelly Apr 23 '19 at 04:45 PM

It's on the forums
